Abstract
Graph Neural Networks (GNNs) are powerful tools for predicting chemical properties, but their black-box nature can limit trust and utility. Explainability through feature attribution and awareness of prediction uncertainty are critical for practical applications, for example in iterative lab-in-the-loop scenarios. We systematically evaluate post-hoc feature attribution methods and study their integration with epistemic uncertainty quantification in GNNs for chemistry. Our findings reveal a strong synergy: attributing uncertainty to specific input features (atoms or substructures) provides a granular understanding of model confidence and highlights potential data gaps or model limitations. On aqueous solubility and molecular weight prediction tasks, we demonstrate that methods such as Feature Ablation and Shapley Value Sampling can effectively identify the molecular substructures driving both a prediction and its uncertainty. This combined approach significantly enhances the interpretability and actionable insights derived from chemical GNNs, facilitating the design of more useful models in research and development.
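To make the combined idea concrete, the following is a minimal sketch, not the paper's exact implementation, of attributing an epistemic uncertainty estimate (ensemble disagreement) back to individual atoms with Captum's Feature Ablation. The `ToyGNN` class, the `ensemble`, `predictive_uncertainty`, and the random `node_features` are illustrative stand-ins for a real trained GNN ensemble and featurized molecule.

```python
import torch
import torch.nn as nn
from captum.attr import FeatureAblation

# Illustrative stand-in for a trained GNN: a linear readout over atom features.
# A real model would also use the molecular graph (edge_index / message passing).
class ToyGNN(nn.Module):
    def __init__(self, num_feats: int):
        super().__init__()
        self.lin = nn.Linear(num_feats, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_atoms, num_feats) -> scalar property prediction
        return self.lin(x).sum()

num_atoms, num_feats = 6, 8
node_features = torch.randn(num_atoms, num_feats)   # hypothetical featurized molecule
ensemble = [ToyGNN(num_feats) for _ in range(5)]    # hypothetical deep ensemble

def predictive_uncertainty(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, num_atoms, num_feats); Captum passes perturbed copies batch-wise.
    outs = []
    for sample in x:
        preds = torch.stack([m(sample) for m in ensemble])
        outs.append(preds.var(dim=0))   # epistemic proxy: ensemble disagreement
    return torch.stack(outs)

# Group each atom's feature columns into one "feature" so ablation removes whole
# atoms at a time, yielding one uncertainty-attribution score per atom.
atom_mask = torch.arange(num_atoms).unsqueeze(1).expand(num_atoms, num_feats)

ablator = FeatureAblation(predictive_uncertainty)
atom_scores = ablator.attribute(
    node_features.unsqueeze(0),          # add the batch dimension Captum expects
    feature_mask=atom_mask.unsqueeze(0),
)
print(atom_scores.squeeze(0)[:, 0])      # per-atom contribution to the uncertainty
```

Swapping `FeatureAblation` for `captum.attr.ShapleyValueSampling`, or attributing the mean prediction instead of its variance, follows the same pattern; the per-atom scores can then be projected onto the molecular structure to highlight substructures responsible for low model confidence.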