Abstract
In recent years, machine learning approaches from natural language processing, most prominently deep neural network-based transformers, have been extensively applied to molecular classification and regression tasks, including the prediction of pharmacokinetic and quantum-chemical properties. However, models based on deep neural networks generally require extensive training, large training data sets, and resource-intensive hyperparameter tuning. Recently, a low-resource and universal alternative to deep learning approaches, based on Gzip compression for text classification, has been proposed; it reportedly performs surprisingly well compared to large language models such as BERT, given its conceptual simplicity. Here, we adapt the proposed method to support multiprocessing, multi-class classification, class weighting, regression, and multiple modalities, and apply it to classification and regression tasks on various molecular data sets from the organic chemistry, biochemistry, drug discovery, and materials science domains. We further propose converting numerical descriptors into string representations, enabling the integration of language input with domain-informed descriptors. Our results show that the method can classify molecules and predict a variety of molecular properties, as well as the binding affinity of protein-ligand complexes, that it reaches the performance of transformers and graph transformers on a subset of tasks, and that it has potential for application in information retrieval from large chemical databases.
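At the core of the referenced Gzip-based text classification approach is the normalized compression distance (NCD) combined with a k-nearest-neighbor vote. The following is a minimal sketch of that idea applied to SMILES strings; the function names, the toy data set, the labels, and the choice of k are illustrative assumptions and do not reproduce the manuscript's actual implementation.

```python
import gzip
from collections import Counter

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings, using gzip
    compressed lengths as an approximation of Kolmogorov complexity."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: str, train: list[tuple[str, str]], k: int = 5) -> str:
    """Predict the class of `query` by majority vote among the k training
    samples with the smallest compression distance to it."""
    neighbors = sorted(train, key=lambda pair: ncd(query, pair[0]))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Toy usage; molecules and labels are purely illustrative.
train = [
    ("CCO", "soluble"),
    ("CCCCCCCCCC", "insoluble"),
    ("OCC(O)CO", "soluble"),
    ("c1ccccc1CCCCCC", "insoluble"),
]
print(classify("CCCO", train, k=3))
```

Under the same scheme, the regression variant described in the abstract would plausibly replace the majority vote with, for example, a distance-weighted mean of the k neighbors' target values, and a numerical descriptor could be integrated by rounding it and concatenating its string form onto the SMILES before compression; both are assumptions about the general approach, not a description of the released code.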
Supplementary weblinks
GitHub Repository
The GitHub repository containing all the code and data described in the manuscript.