Abstract
Transferable force fields are widely used in Molecular Dynamics (MD) simulations. Once parametrized on a given dataset, however, they are difficult to refit to new chemical entities. Machine Learning Force Fields (MLFFs), by contrast, have gained attention for their accuracy and for the ease with which their Applicability Domain (AD) can be expanded. Nonetheless, their prediction times make them incompatible with High Throughput Virtual Screening (HTVS) campaigns. Inverting the approach widely adopted with transferable force fields, whose parameters are derived from QM data on representative molecules and then transferred to broader chemical spaces, we propose a condensation approach that leverages massive MLFF parameter prediction, improving computational efficiency 30-fold without unduly sacrificing accuracy. When tested on the public release of the OpenFF Industry Benchmark Season 1 v1.1 dataset, molecular structures optimized by minimizing the Potential Energy Surface built from the condensed force-field parameters show only a minor decrease in Root Mean Squared Deviation (RMSD) and Torsion Fingerprint Deviation (TFD) performance compared to those obtained with parameters predicted at runtime. For additional context, the original Espaloma and its condensed version are evaluated against several well-known transferable force fields widely used for biomolecular simulations.