Abstract
In recent years, machine learning approaches from NLP, most prominently transformers based on deep neural networks, have been applied to molecular classification and property-prediction (regression) tasks. However, models based on deep neural networks are often slow, and thus expensive, to train and usually require significant hyperparameter-tuning effort. Recently, a low-resource and universal alternative to deep learning for text classification based on Gzip compression has been proposed that reportedly outperforms BERT on all evaluated out-of-distribution tasks. Here, we adapt the proposed method to support multiprocessing, multi-class classification, class weighting, and regression, and apply it to classification and regression tasks on various collections of molecules from the organic chemistry and drug design domains. Our results show that the method can indeed be used to classify molecules and predict their properties, and that it reaches the performance of large-scale chemical language models on a subset of tasks.
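As a rough illustration of the kind of compression-based approach the abstract refers to, the sketch below pairs a gzip-derived normalized compression distance with a k-nearest-neighbour vote over SMILES strings. It is a minimal, hypothetical example assuming SMILES input and majority-vote classification; it is not the authors' implementation (see the GitHub repository for that), and the function names and toy data are illustrative only.

```python
import gzip
from collections import Counter

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two strings, using gzip as the compressor."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_classify(query: str, train: list[tuple[str, str]], k: int = 5) -> str:
    """Return the majority label among the k training SMILES closest to the query by NCD."""
    nearest = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Toy usage with hypothetical labels: classify a SMILES string against a tiny labelled set.
train_set = [("CCO", "alcohol"), ("CCN", "amine"), ("CC(=O)O", "acid")]
print(knn_classify("CCCO", train_set, k=1))
```

Class weighting and regression, as mentioned in the abstract, would replace the simple majority vote with weighted or averaged neighbour contributions.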
Supplementary weblinks
GitHub Repository
The GitHub repository containing all the code and data described in the manuscript.