Abstract
Retrieving structured materials information from unstructured textual data is essential for data mining and for automatically developing comprehensive ontologies. Information extraction is a complex task composed of multiple subtasks and thus often relies on systems of task-specialized language models. A foundation language model can in principle address not only a variety of these subtasks but also a range of domains, without the need to generate costly large-scale annotated datasets for each downstream task. While the materials science domain, which is adversely affected by data scarcity, would strongly benefit from this, foundation language models struggle with information extraction subtasks in domain-specific settings. This also applies to the named entity recognition (NER) subtask, which aims to detect mentions of relevant entity types in natural language text.
This work assesses whether foundation large language models (LLMs) can successfully perform NER in the materials mechanics and fatigue domain and thereby alleviate the data annotation burden. Specifically, we compare few-shot prompting of foundation LLMs with the current state of the art, fine-tuned task-specific NER models. The study is performed on two materials fatigue datasets that are annotated at a comparatively fine-grained level. The datasets cover adjacent domains, allowing us to assess how well both NER methodologies generalize under typical domain shifts. Task-specific models significantly outperform general foundation models. However, the GPT-4 foundation model attains promising F1-scores with the proposed two-stage prompting strategy despite being provided with only ten demonstrations, and under those circumstances it even outperforms task-specific models for some rather general entity types. Possible avenues for improving foundation LLM-based NER are discussed. Our findings reveal that handling domain shift with in-context learning (ICL) depends strongly on the quality of the few-shot demonstrations. The study also highlights the significance of domain-specific pre-training by comparing task-specific models that differ primarily in their pre-training corpus.
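For illustration, the sketch below shows one possible shape of a two-stage few-shot NER prompting pipeline: a first prompt asks the model to mark entity spans inline using the demonstrations, and a second prompt asks it to assign each marked span a final entity type. The entity types, demonstrations, and the `query_llm` helper are hypothetical placeholders and do not reproduce the exact prompts or entity schema used in this study.

```python
# Minimal sketch of a two-stage few-shot NER prompting pipeline.
# All entity types, demonstrations, and query_llm() are hypothetical;
# the study's actual prompts and schema may differ.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a foundation LLM API (e.g. GPT-4)."""
    raise NotImplementedError

FEW_SHOT_DEMOS = [
    # (input sentence, sentence with inline entity markers)
    ("The specimens were machined from AISI 4140 steel.",
     "The specimens were machined from <MATERIAL>AISI 4140 steel</MATERIAL>."),
    ("Fatigue tests were run at a stress amplitude of 420 MPa.",
     "Fatigue tests were run at a <PARAMETER>stress amplitude</PARAMETER> "
     "of <VALUE>420 MPa</VALUE>."),
]

def build_stage1_prompt(sentence: str) -> str:
    """Stage 1: ask the model to mark entity spans inline, guided by demos."""
    demo_text = "\n".join(f"Input: {s}\nOutput: {a}" for s, a in FEW_SHOT_DEMOS)
    return (
        "Mark all named entities in the input sentence with inline tags, "
        "following the examples.\n\n"
        f"{demo_text}\n\nInput: {sentence}\nOutput:"
    )

def build_stage2_prompt(tagged_sentence: str) -> str:
    """Stage 2: ask the model to assign each marked span a final entity type."""
    return (
        "For each tagged span in the sentence below, output the span and its "
        "entity type as 'span -> TYPE', one per line.\n\n"
        f"Sentence: {tagged_sentence}\nSpans:"
    )

def extract_entities(sentence: str) -> str:
    tagged = query_llm(build_stage1_prompt(sentence))
    return query_llm(build_stage2_prompt(tagged))
```

Splitting span detection from type assignment keeps each prompt short and lets the limited demonstration budget (here, ten examples) be spent on the harder of the two subtasks.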
Supplementary weblinks
MaterioMiner Dataset: Static repository of the "MaterioMiner Dataset", an ontology-based text-mining dataset for the extraction of process-structure-property entities.