Abstract
In this paper, we propose MolOpt, the first attempt of its kind to use Multi-Agent Reinforcement Learning (MARL) for autonomous molecular geometry optimization (MGO). MGO algorithms are typically hand-designed; MolOpt instead uses MARL to learn an optimizer (a policy) that performs MGO without depending on hand-designed optimizers. We cast MGO as a MARL problem in which each agent corresponds to a single atom in the molecule, and MolOpt performs MGO by minimizing the forces on each atom. Our experiments demonstrate the generalization ability of MolOpt: trained on ethane, butane, and isobutane, it performs MGO on propane, pentane, hexane, heptane, and octane. In terms of performance, MolOpt outperforms the MDMin optimizer and performs comparably to the FIRE optimizer, though it does not surpass the BFGS optimizer. These results show that MolOpt has the potential to advance MGO by providing a novel approach based on reinforcement learning (RL), which may open up new research directions for MGO. Overall, this work serves as a proof of concept for the potential of MARL in MGO.
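To make the per-atom "agent" framing concrete, the following is a minimal toy sketch, not MolOpt's actual learned policy: each atom independently takes a step along its own force (a fixed steepest-descent rule standing in for the learned policy), on a two-atom Lennard-Jones system as a stand-in potential. All function names and parameters here are illustrative assumptions, not from the paper.

```python
import numpy as np

def lj_forces(positions, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces (toy surrogate for a real potential)."""
    forces = np.zeros_like(positions)
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = positions[i] - positions[j]
            d = np.linalg.norm(r)
            # force on atom i is -dV/dd * r/d for V = 4*eps*((s/d)^12 - (s/d)^6)
            f_mag = 24 * epsilon * (2 * (sigma / d) ** 12 - (sigma / d) ** 6) / d
            f = f_mag * r / d
            forces[i] += f
            forces[j] -= f
    return forces

def agent_step(force, lr=0.01, max_step=0.05):
    """One per-atom 'agent' action: move along the atom's own force, step-capped.
    In MolOpt this mapping from local observation to displacement is learned."""
    step = lr * force
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm
    return step

# Two atoms started away from the LJ equilibrium distance r_min = 2^(1/6)*sigma
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
for _ in range(200):
    forces = lj_forces(pos)
    if np.abs(forces).max() < 1e-4:  # converged: forces on every atom near zero
        break
    pos += np.array([agent_step(f) for f in forces])

print(np.linalg.norm(pos[0] - pos[1]))  # → close to 2^(1/6) ≈ 1.122
```

The convergence criterion (maximum force component below a threshold) mirrors how MGO runs are typically terminated; the decentralized update, where each atom acts only on its own force, is the structural analogue of one agent per atom.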
Supplementary materials
Title
MolOpt: Autonomous Molecular Geometry Optimization using Multi-Agent Reinforcement Learning
Description
The SI contains results for different variants of MolOpt, together with benchmark results comparing MolOpt against the BFGS, FIRE, and MDMin optimizers.