Rapid analysis of materials characterization spectra is pivotal for preventing the accumulation of unwieldy datasets, thereby accelerating subsequent decision-making. However, current methods rely heavily on experience and domain knowledge, which is not only tedious but also struggles to keep pace with data acquisition. In this context, we introduce a transferable Vision Transformer (ViT) model for identifying materials from their spectra, including XRD and FTIR. First, an optimized ViT model was trained to predict metal-organic frameworks (MOFs) from their XRD spectra. It attains Top-1, Top-3, and Top-5 prediction accuracies of 70%, 93%, and 94.9%, respectively, with a shorter training time (269 seconds) than a convolutional neural network model. Dimension reduction and attention weight maps underline its adeptness at capturing the features in the XRD spectra that determine the prediction outcome. Moreover, the model can be transferred to a new one for predicting organic molecules from their FTIR spectra, attaining remarkable Top-1, Top-3, and Top-5 prediction accuracies of 84%, 94.1%, and 96.7%, respectively. The introduced ViT-based model opens a new avenue for handling diverse types of spectroscopic data, thus expediting materials characterization processes.
An Interpretable and Transferrable Vision Transformer Model for Rapid Materials Spectra Classification
29 September 2023, Version 1
This content is a preprint and has not undergone peer review at the time of posting.