Improving De Novo Molecular Design with Curriculum Learning

20 October 2021, Version 1
This content is a preprint and has not undergone peer review at the time of posting.


Reinforcement learning (RL) is a powerful paradigm that has gained popularity across multiple domains. However, applying RL can require many interactions between the agent and the environment. This cost is especially pronounced when a single round of environment feedback is slow or computationally expensive, leading to extended periods of unproductive computation. Curriculum learning (CL) offers a remedy by arranging a sequence of tasks of increasing complexity, with the aim of reducing the overall cost of learning. Here, we demonstrate the application of CL to drug discovery. We implement CL in the de novo design platform REINVENT and apply it to illustrative de novo molecular design problems of varying complexity. The results show both accelerated learning and a positive impact on the quality of the output when compared to standard policy-based RL. To our knowledge, this is the first application of CL for the purposes of de novo molecular design. The code is freely available at


Curriculum Learning
De Novo Design
Drug Design
Deep Generative Model
Computational Chemistry

Supplementary materials

Supporting Information
Supporting figures and tables.

Supplementary weblinks

