Abstract
Artificial intelligence, represented by large language models (LLMs), has demonstrated remarkable capabilities in natural language understanding and information extraction. To evaluate how well different LLMs extract information from academic papers, this study explores their application in reticular chemistry, focusing on their effectiveness in generating Q&A datasets and extracting synthesis conditions from the scientific literature. The models evaluated are OpenAI's GPT-4 Turbo, Anthropic's Claude 3 Opus, and Google's Gemini 1.5 Pro. Key results indicate that Claude excelled at providing complete synthesis data, while Gemini outperformed the others in accuracy, in compliance with the instruction to exclude characterization data, and in proactively structuring its responses. Although GPT-4 scored lower on quantitative metrics, it demonstrated strong logical reasoning and contextual inference. Overall, Gemini and Claude achieved the highest scores in accuracy, groundedness, and adherence to prompt requirements, making them suitable benchmarks for future studies. These findings highlight the potential of LLMs to aid scientific research, particularly through the efficient construction of structured datasets, which can be used to train predictive models and assist in the synthesis of new metal-organic frameworks (MOFs).
Supplementary materials
Title
Supporting Information
Description
The number of selected DOIs for each task; the prompts used for extracting synthesis conditions and generating Q&A datasets; the evaluation flowchart for each product in the synthesis-condition dataset; an example of a Gemini response in the Q&A generation task.
Supplementary weblinks
Title
Supplementary files
Description
Original datasets generated by LLMs; human-evaluation charts for all LLMs and tasks.