Theoretical and Computational Chemistry

Improving De Novo Molecular Design with Curriculum Learning

Authors

Abstract

Reinforcement learning (RL) is a powerful paradigm that has gained popularity across multiple domains. However, applying RL can require many interactions between the agent and the environment. This cost is especially pronounced when each piece of feedback from the environment is slow or computationally expensive to obtain, leading to long periods of unproductive training. Curriculum learning (CL) offers a suitable alternative by arranging a sequence of tasks of increasing complexity with the aim of reducing the overall cost of learning. Here, we demonstrate the application of CL to drug discovery. We implement CL in the de novo design platform REINVENT and apply it to illustrative de novo molecular design problems of varying complexity. The results show both accelerated learning and a positive impact on the quality of the output compared with standard policy-based RL. To our knowledge, this is the first application of CL to de novo molecular design. The code is freely available at https://github.com/MolecularAI/Reinvent.
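The core idea of the abstract — a sequence of tasks of increasing complexity, where the agent must master each task before being promoted to the next — can be sketched as follows. This is a minimal toy illustration, not REINVENT's actual API: the `Task` and `run_curriculum` names, the scoring functions, and the hill-climbing "policy" (a single scalar parameter standing in for a generative model) are all illustrative assumptions.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Task:
    """One curriculum stage: a scoring function plus a promotion threshold."""
    name: str
    score: Callable[[float], float]   # scores one sampled candidate (toy stand-in for a molecule)
    threshold: float                  # mean score needed to advance to the next task

def run_curriculum(tasks: List[Task],
                   samples_per_epoch: int = 50,
                   max_epochs: int = 200,
                   seed: int = 0) -> Tuple[float, List[Tuple[str, Optional[int]]]]:
    """Run the tasks in order; the 'policy' is a single mean parameter mu
    that drifts toward the best-scoring sample each epoch (a crude stand-in
    for a policy-gradient update)."""
    rng = random.Random(seed)
    mu = 0.0
    history = []
    for task in tasks:
        for epoch in range(max_epochs):
            samples = [mu + rng.gauss(0, 1) for _ in range(samples_per_epoch)]
            scores = [task.score(x) for x in samples]
            # move the policy toward the best-scoring sample
            best = max(zip(scores, samples))[1]
            mu += 0.5 * (best - mu)
            if sum(scores) / len(scores) >= task.threshold:
                history.append((task.name, epoch))   # promoted to next task
                break
        else:
            history.append((task.name, None))        # curriculum stalled
    return mu, history

# Tasks get harder: the rewarded region moves further from the starting policy.
tasks = [
    Task("easy",   lambda x: 1.0 if x > 1 else 0.0, 0.8),
    Task("medium", lambda x: 1.0 if x > 3 else 0.0, 0.8),
    Task("hard",   lambda x: 1.0 if x > 5 else 0.0, 0.8),
]
```

The point of the ordering is that the "hard" task alone gives almost no reward signal to a fresh policy (nearly all samples score zero), whereas each completed stage leaves the policy close enough to the next rewarded region to learn from it — which mirrors why a deliberately sub-optimal curriculum, as noted in the version notes, plateaus.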

Version notes

Added a general section in the Supporting Information with practical guidelines for applying Curriculum Learning. Added Supporting Information figures showing example generated 2D structures; these figures show that the Curriculum Learning experiments perform scaffold hopping rather than lead optimization. Added a Supporting Information figure showing an experiment with a purposely sub-optimal curriculum, which results in a learning plateau.

Content

CL-paper-v2.pdf

Supplementary material

CL-paper-SI-v2.pdf
Supporting Information
Supporting figures and tables.

Supplementary weblinks

REINVENT Codebase
Codebase for the paper.
Tutorials
Jupyter notebook tutorials for the paper.