BigBind: Learning from Nonstructural Data for Structure-Based Virtual Screening



Recent attempts at utilizing deep learning for structure-based virtual screening have focused on training models to predict binding affinity from protein-ligand complexes with known crystal structures. The PDBbind dataset is the current standard for training such models, but its small size (fewer than 20K binding affinity measurements) leads to models that fail to generalize to new targets, and model performance is typically on par with that of models trained on ligand information alone. The CrossDocked dataset expands binding pose data for protein-ligand complexes but does not introduce new affinity data. ChEMBL, on the other hand, contains a wealth of binding affinity information but no information about binding poses. We introduce BigBind, a dataset that maps ChEMBL activity data to protein targets from CrossDocked. This dataset comprises 851K ligand binding affinities and 3D pocket structures. After augmenting this dataset with an equal number of putative inactives for each target, we train BANANA (BAsic NeurAl Network for binding Affinity) to classify actives from inactives. The resulting model achieved an AUC of 0.72 on BigBind’s test set, while a ligand-only model achieved an AUC of 0.64. Our model achieves competitive performance on the LIT-PCBA benchmark (median EF1% of 2.06) while running 16,000 times faster than molecular docking with GNINA. Notably, we achieve a state-of-the-art EF1% of 4.95 when we use BANANA to filter out 90% of the compounds prior to docking with GNINA. We hope that BANANA and future models trained on this dataset will prove useful for prospective virtual screening tasks.
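The EF1% metric quoted above is the standard early-enrichment measure for virtual screening: the hit rate among the top-scored 1% of a library divided by the overall hit rate. As a minimal sketch (this is not the paper's evaluation code, and the function name and data are illustrative), it can be computed like so:

```python
def enrichment_factor(scores, labels, fraction=0.01):
    """Enrichment factor at the given top fraction.

    scores: predicted activity scores (higher = more likely active).
    labels: 1 for a true active, 0 for an inactive/decoy.
    Returns (hit rate in top fraction) / (overall hit rate).
    """
    n = len(scores)
    n_top = max(1, int(n * fraction))
    # Rank compounds by descending predicted score and take the top fraction.
    order = sorted(range(n), key=lambda i: -scores[i])
    top_hits = sum(labels[i] for i in order[:n_top])
    total_hits = sum(labels)
    return (top_hits / n_top) / (total_hits / n)


# Hypothetical library of 100 compounds with 10 actives, where the
# single top-ranked compound is a true active: EF1% = (1/1)/(10/100) = 10.
scores = list(range(100, 0, -1))
labels = [1] + [0] * 90 + [1] * 9
print(enrichment_factor(scores, labels, fraction=0.01))  # → 10.0
```

With `fraction=0.01` this is EF1%; a random ranking gives an expected value of 1, so the reported median EF1% of 2.06 corresponds to roughly doubling the hit rate in the top 1% of the screen.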

Version notes

Added NIH grant


Supplementary material

Supporting Information: BigBind: Learning from Nonstructural Data for Structure-Based Virtual Screening
Supporting information for the BigBind paper.