Abstract
Graph Neural Networks (GNNs) have revolutionized molecular property prediction by leveraging graph-based representations, yet their opaque decision-making hinders broader adoption in drug discovery. This study introduces the Activity-Cliff-Explanation-Supervised GNN (ACES-GNN) framework, designed to improve predictive accuracy and interpretability simultaneously by integrating explanation supervision for activity cliffs (ACs) into GNN training. ACs, pairs of structurally similar molecules with large potency differences, pose a challenge for traditional models, which rely on shared structural features to make predictions. By aligning model attributions with chemist-friendly interpretations, ACES-GNN bridges the gap between prediction and explanation. Validated across 30 pharmacological targets, ACES-GNN consistently improves both predictive accuracy and attribution quality relative to baseline methods. Our results demonstrate a strong correlation between improved predictions and accurate explanations, yielding a robust and adaptable framework for addressing the "intra-scaffold" generalization problem. This work underscores the potential of explanation-guided learning to advance interpretable artificial intelligence in molecular modeling and drug discovery.
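The abstract does not state the training objective, but explanation supervision of this kind is commonly realized as an auxiliary attribution-alignment term added to the prediction loss. A minimal sketch, using illustrative notation rather than the paper's own:

\[
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{pred}}\bigl(\hat{y},\, y\bigr) \;+\; \lambda\, \mathcal{L}_{\mathrm{expl}}\bigl(a(G),\, m\bigr)
\]

where \(\hat{y}\) is the predicted potency for molecule \(G\), \(a(G)\) denotes the per-atom attributions produced by the GNN, \(m\) is a reference attribution mask (for an AC pair, for example, one highlighting the structural difference between the two molecules), and \(\lambda\) is a hyperparameter trading prediction accuracy against attribution quality. All symbols here are assumptions for illustration, not the paper's formulation.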
Supplementary materials
Supplementary Information for ACES-GNN: Can Graph Neural Networks Learn to Explain Activity Cliffs?
Supplementary Tables S1 to S12
Supplementary Figures S1 to S5