Abstract
The IUPAC (International Union of Pure and Applied Chemistry) nomenclature is a globally recognized naming system that assigns unique names to chemical compounds. As the form of molecular representation closest to natural language, it makes it possible to model molecular data in a large-scale pre-training paradigm by employing machine learning approaches from natural language processing (NLP). Although SMILES is currently the molecular representation used by most generative models, different representations suit different scenarios, and given the advantages of IUPAC nomenclature in readability, it is worthwhile to explore how these two representations differ on molecular generation and regression/classification tasks. In this paper, we adapt the transformer architecture to a large IUPAC corpus by constructing a GPT-2-like language model named iupacGPT. For each downstream task other than molecular generation, we freeze the pretrained parameters and attach trainable lightweight networks for fine-tuning. The results show that the pretrained iupacGPT captures general knowledge that transfers successfully to downstream tasks such as molecule generation, binary classification, and property regression. Moreover, under the same setup, iupacGPT outperforms the corresponding smilesGPT model on these downstream tasks. Overall, transformer-like language models pretrained on IUPAC corpora are promising alternatives that are more intuitive in terms of interpretability and semantics than models pretrained on SMILES corpora, and they scale well with pretraining data size. Code and data are available at https://github.com/AspirinCode/iupacGPT
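To make the frozen-backbone fine-tuning concrete, the following is a minimal PyTorch sketch of the idea: freeze all pretrained GPT-2 weights and train only a lightweight head on top. The checkpoint name ("gpt2" as a stand-in for the actual iupacGPT weights), the class name, and the last-token pooling are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
from transformers import GPT2Model

class FrozenGPTClassifier(nn.Module):
    """GPT-2 backbone with frozen weights and a small trainable head."""

    def __init__(self, backbone: GPT2Model, n_labels: int = 2):
        super().__init__()
        self.backbone = backbone
        # Freeze every pretrained parameter; only the head is updated.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Lightweight trainable network attached on top of the hidden states.
        self.head = nn.Linear(backbone.config.n_embd, n_labels)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        # Pool the hidden state of the last real (non-padding) token,
        # following the usual convention for causal language models.
        last_idx = attention_mask.sum(dim=1) - 1
        pooled = out.last_hidden_state[torch.arange(input_ids.size(0)), last_idx]
        return self.head(pooled)

# Stand-in weights: the public GPT-2 checkpoint, not the released iupacGPT weights.
backbone = GPT2Model.from_pretrained("gpt2")
model = FrozenGPTClassifier(backbone, n_labels=2)

For property regression, the same sketch applies with n_labels=1 and a regression loss (e.g. MSE) in place of cross-entropy.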
Supplementary weblinks
Title: code
Description: All of the data and code are available at https://github.com/AspirinCode/iupacGPT