Evaluating Machine Learning Models for Molecular Property Prediction: Performance and Robustness on Out-of-Distribution Data

01 March 2025, Version 1
This content is a preprint and has not undergone peer review at the time of posting.

Abstract

Today, machine learning models are employed extensively to predict the physicochemical and biological properties of molecules. Their performance is typically evaluated on in-distribution (ID) data, i.e., data originating from the same distribution as the training data. However, real-world applications of such models often involve molecules that are more distant from the training data, which necessitates assessing their performance on out-of-distribution (OOD) data. In this work, we evaluate twelve machine learning models, ranging from classical approaches such as random forests to graph neural network (GNN) methods such as message-passing neural networks, across eight data sets using seven splitting strategies for OOD data generation. First, we investigate what constitutes OOD data in the molecular domain for bioactivity and ADMET prediction tasks. Contrary to the common view, we show that both classical machine learning and GNN models perform well on data split by Bemis-Murcko scaffolds, with results not substantially different from random splitting. Splitting based on chemical similarity clustering (K-means clustering of ECFP4 fingerprints) poses the hardest challenge for both types of models. Second, we investigate the extent to which ID and OOD performance are positively linearly related. If such a correlation holds, the models that perform best on ID data can be selected with the expectation that they will also perform best on OOD data. We show that the strength of this linear relationship depends strongly on the splitting strategy used to generate the OOD data: while the correlation between ID and OOD performance is strong for scaffold splitting (Pearson $r\sim0.9$), it decreases markedly for cluster-based splitting (Pearson $r\sim0.4$). The relationship is thus more nuanced than often assumed, and a strong positive correlation is not guaranteed for all OOD scenarios. These findings suggest that OOD performance evaluation and model selection should be carefully aligned with the intended application domain.
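
As a concrete illustration of two of the seven splitting strategies discussed above, the sketch below shows a Bemis-Murcko scaffold split and a cluster-based split (K-means on ECFP4 fingerprints) built with RDKit and scikit-learn. This is a minimal sketch under assumed defaults (test fraction, cluster count, hypothetical function names), not the protocol used in this work.

```python
# Minimal sketch of two OOD splitting strategies; function names,
# fractions, and cluster counts are illustrative assumptions.
from collections import defaultdict

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Scaffolds import MurckoScaffold
from sklearn.cluster import KMeans


def scaffold_split(smiles_list, test_fraction=0.2, seed=0):
    """Group molecules by Bemis-Murcko scaffold and assign whole
    scaffold groups to the test set until the target fraction is reached."""
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        groups[MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)].append(i)
    scaffolds = list(groups)
    np.random.default_rng(seed).shuffle(scaffolds)
    n_test, test_idx = int(test_fraction * len(smiles_list)), []
    for scaffold in scaffolds:
        if len(test_idx) >= n_test:
            break
        test_idx.extend(groups[scaffold])
    test_set = set(test_idx)
    train_idx = [i for i in range(len(smiles_list)) if i not in test_set]
    return train_idx, test_idx


def cluster_split(smiles_list, n_clusters=5, test_cluster=0, seed=0):
    """K-means clustering of ECFP4 fingerprints (Morgan, radius 2);
    one whole cluster is held out as the OOD test set."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)  # assumes all SMILES parse
        fps.append(np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(np.stack(fps))
    train_idx = np.where(labels != test_cluster)[0].tolist()
    test_idx = np.where(labels == test_cluster)[0].tolist()
    return train_idx, test_idx
```

Holding out whole scaffold groups (or whole clusters) ensures that test molecules share no scaffold (or cluster) with the training set, which is what makes the resulting split out-of-distribution rather than random.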
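
The second analysis reduces to correlating per-model scores across the two evaluation settings. A minimal sketch with invented scores (not results from this preprint) is shown below.

```python
# Hypothetical illustration of the ID-vs-OOD correlation analysis;
# the scores are invented, not taken from the preprint.
from scipy.stats import pearsonr

id_scores = [0.81, 0.78, 0.85, 0.74, 0.80]    # per-model performance, ID split
ood_scores = [0.70, 0.66, 0.72, 0.60, 0.69]   # same models, OOD split

r, p = pearsonr(id_scores, ood_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```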

Keywords

molecular property prediction
out-of-distribution data
machine learning
benchmarking
graph neural networks
dataset splitting

Supplementary materials

Supporting information: Detailed protocols for data set preprocessing, complete model training parameters and hyperparameter selection methodology, statistical properties and characteristics of all data sets, comprehensive tables of accuracy metrics for all experiments described in the main manuscript, and additional visualizations of model performance.
