Abstract
Accurately predicting activation energies is crucial for understanding chemical reactions and modeling complex reaction systems. However, the high computational cost of quantum chemistry methods often limits the feasibility of large-scale studies, leading to a scarcity of high-quality activation energy data. In this work, we explore and compare three approaches—transfer learning, delta learning, and feature engineering—to enhance the accuracy of activation energy predictions using graph neural networks, specifically focusing on methods that incorporate low-cost, low-level computational data. Using the Chemprop model, we systematically evaluated how these methods leverage data from semiempirical quantum mechanical (SQM) calculations to improve predictions. Delta learning, which adjusts low-level SQM activation energies to align with high-level CCSD(T)-F12a targets, emerged as the most effective method, achieving high accuracy with substantially reduced high-level data requirements. Notably, delta learning trained with just 20%–30% of high-level data matched or exceeded the performance of other methods trained with full datasets, making it advantageous in data-scarce scenarios. However, its reliance on transition state searches imposes significant computational demands during model application. Transfer learning, which pretrains models on large datasets of low-level data, provided mixed results, particularly when there was a mismatch in the reaction distributions between the training and target datasets. Feature engineering, which involves adding computed molecular properties as input features, showed modest gains, especially when incorporating thermodynamic properties. Our study highlights the trade-offs between accuracy and computational demand in selecting the best approach for enhancing activation energy predictions.
These insights provide valuable guidelines for researchers aiming to apply machine learning in chemical reaction engineering, helping to balance accuracy with resource constraints.
Supplementary materials
Supporting Information: Enhancing Activation Energy Predictions under Data Constraints Using Machine Learning
This supporting information provides additional details for the study "Enhancing Activation Energy Predictions under Data Constraints Using Machine Learning." The document outlines the hyperparameter optimization performed for Chemprop and the other models, and presents the classification of reaction types in the CCSD(T)-F12a and RGD1 databases using Reaction Mechanism Generator templates. All model parity plots and deviation plots are included. A comparison of transfer learning pretrained on reaction energies versus activation energies is also provided, along with evaluations of several model architectures, both GNN and non-GNN. For delta learning, results obtained with the AM1 and PM3 semiempirical methods are documented. Lastly, we present additional computational details for the HSAB descriptors and the total computational cost of the SQM methods.