Abstract
Virtual Screening (VS) of vast compound libraries guided by Artificial Intelligence (AI) models is a highly productive approach to early drug discovery. Data splitting is crucial for reliable benchmarking of such AI models. Traditional random splits place similar molecules in the training and test sets, conflicting with the reality of VS libraries, which mostly contain structurally distinct compounds. To tackle this challenge, the scaffold split, which groups molecules by shared core structure, and Butina clustering, which groups molecules by chemotype, were proposed. In the present study, however, we show that these splitting methods still introduce high similarities between clusters, leading to overestimated model performance. We examined three representative AI models on 60 NCI-60 datasets, each containing approximately 33,000 to 54,000 molecules tested on a different cancer cell line. Each dataset was split with four methods: random, scaffold, Butina clustering, and the more realistic Uniform Manifold Approximation and Projection (UMAP) clustering. Across the 300 models trained and evaluated for each algorithm and split, performance was markedly worse under UMAP splits, regardless of the model. These robust results demonstrate the need for more realistic data splits to tune, compare, and select models for VS. The rigorous UMAP-clustering splits reveal that model generalization remains a gap when the splitting method changes. The code to reproduce these results is available at https://github.com/Rong830/UMAP_split_for_VS
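The cluster-based splitting idea described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: scikit-learn's KMeans on random toy features stands in for UMAP embeddings of molecular fingerprints (which would require the umap-learn and RDKit packages), and the 20% held-out fraction is an assumed parameter. The key property shown is that whole clusters, not individual molecules, are assigned to the test set, so no cluster is shared between train and test.

```python
# Hypothetical sketch of a cluster-based train/test split in the spirit of a
# UMAP-clustering split: molecules are grouped into clusters, and entire
# clusters are held out, so train and test never share a cluster.
# KMeans on toy random features stands in for clustering UMAP embeddings
# of real molecular fingerprints.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))  # toy stand-in for fingerprint embeddings

n_clusters = 10
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# Assign whole clusters to the test set until ~20% of molecules are held out.
test_clusters, n_test = set(), 0
for c in rng.permutation(n_clusters):
    if n_test >= 0.2 * len(X):
        break
    test_clusters.add(int(c))
    n_test += int((labels == c).sum())

test_mask = np.isin(labels, list(test_clusters))
train_idx = np.where(~test_mask)[0]
test_idx = np.where(test_mask)[0]
```

Because membership is decided at the cluster level, the test set contains chemotypes the model never saw during training, which is what makes this style of split a harder, more realistic benchmark than a random split.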