ChemRxiv
These are preliminary reports that have not been peer-reviewed. They should not be regarded as conclusive, used to guide clinical practice or health-related behavior, or reported in news media as established information.

Giving Attention to Generative VAE Models for De Novo Molecular Design

preprint
submitted on 06.02.2021, 01:17 and posted on 08.02.2021, 13:11 by Orion Dollar, Nisarg Joshi, David A. C. Beck, and Jim Pfaendtner

We explore the impact of adding attention to generative VAE models for molecular design. Four model types are compared: a simple recurrent VAE (RNN), a recurrent VAE with an added attention layer (RNNAttn), a transformer VAE (TransVAE), and the previous state of the art (MosesVAE). The models are assessed on how they organize the latent space (i.e., the latent memory) and on their ability to generate samples that are valid and novel. Additionally, the Shannon information entropy is used to measure the complexity of the latent memory within an information bottleneck theoretical framework, and we define a novel metric to assess the extent to which models explore chemical phase space. The RNN, RNNAttn, and TransVAE models are trained on millions of molecules from either the ZINC or PubChem datasets. We find that both the RNNAttn and TransVAE models reconstruct input SMILES strings substantially more accurately than the MosesVAE or RNN models, particularly for larger molecules up to ~700 Da. The TransVAE learns a complex “molecular grammar” that includes detailed molecular substructures as well as high-level structural and atomic relationships. The RNNAttn models learn the most efficient compression of the input data while still maintaining good performance. The complexity of the compressed representation learned by each model type increases in the order MosesVAE < RNNAttn < RNN < TransVAE. We find an unavoidable tradeoff between model exploration and validity that is a function of the complexity of the latent memory. However, novel sampling schemes can optimize this tradeoff, allowing us to exploit the information-dense representations learned by the transformer in spite of their complexity.
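The information bottleneck analysis above centers on the Shannon entropy of the latent memory. As a rough illustration of the kind of measurement involved (a hedged sketch, not the authors' implementation: the `latent_codes` array, the bin count, the fixed histogram range, and the per-dimension averaging are all assumptions), one can discretize each latent dimension of a trained VAE's encodings and compute the empirical entropy:

```python
# Hedged sketch (not the authors' code): estimating the Shannon entropy of a
# VAE's latent memory by binning each latent dimension of the encoded training
# set and averaging the per-dimension empirical entropies.
import numpy as np

def latent_entropy(latent_codes: np.ndarray, n_bins: int = 64) -> float:
    """Mean per-dimension Shannon entropy (bits) of a set of latent codes.

    latent_codes: hypothetical (n_molecules, latent_dim) array of encoder
    outputs; the fixed histogram range assumes roughly unit-Gaussian codes.
    """
    entropies = []
    for dim in range(latent_codes.shape[1]):
        counts, _ = np.histogram(latent_codes[:, dim], bins=n_bins,
                                 range=(-4.0, 4.0))
        probs = counts / max(counts.sum(), 1)
        probs = probs[probs > 0]          # 0 * log2(0) contributes nothing
        entropies.append(-np.sum(probs * np.log2(probs)))
    return float(np.mean(entropies))

# Toy check with stand-in Gaussian codes: a broadly spread latent memory
# (more "complex" in the information bottleneck sense) has higher entropy
# than a tightly concentrated one.
rng = np.random.default_rng(0)
broad = rng.normal(scale=1.0, size=(10_000, 128))
narrow = rng.normal(scale=0.1, size=(10_000, 128))
print(latent_entropy(broad), ">", latent_entropy(narrow))  # roughly 5.0 > 1.7
```

Under these assumptions, the more dispersed latent memory yields the higher entropy, matching the abstract's intuition that a more complex compressed representation carries more information per latent dimension.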

Email Address of Submitting Author: odollar@uw.edu
Institution: University of Washington
Country: United States
ORCID For Submitting Author: 0000-0002-5254-4494
Declaration of Conflict of Interest: There is no conflict of interest.
