We explore the impact of adding attention to generative VAE models for molecular design. Four model types are compared: a simple recurrent VAE (RNN), a recurrent VAE with an added attention layer (RNNAttn), a transformer VAE (TransVAE), and the previous state-of-the-art (MosesVAE). The models are assessed based on their effect on the organization of the latent space (i.e., the latent memory) and their ability to generate samples that are valid and novel. Additionally, we use the Shannon information entropy to measure the complexity of the latent memory within an information bottleneck theoretical framework, and we define a novel metric to assess the extent to which models explore chemical phase space. All four model types are trained on millions of molecules from either the ZINC or PubChem datasets. We find that both the RNNAttn and TransVAE models substantially outperform the MosesVAE and RNN models at accurately reconstructing input SMILES strings, particularly for larger molecules up to ~700 Da. The TransVAE learns a complex “molecular grammar” that includes detailed molecular substructures and high-level structural and atomic relationships. The RNNAttn models learn the most efficient compression of the input data while still maintaining good performance. The complexity of the compressed representation learned by each model type increases in the order MosesVAE < RNNAttn < RNN < TransVAE. We find that there is an unavoidable tradeoff between model exploration and validity that is a function of the complexity of the latent memory. However, novel sampling schemes may be used that optimize this tradeoff, allowing us to utilize the information-dense representations learned by the transformer in spite of their complexity.
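The abstract does not specify how the Shannon information entropy of the latent memory is estimated. As an illustration only, a minimal sketch of one common approach — discretizing each latent dimension with a histogram and averaging the per-dimension entropies — might look like the following (the function name, binning scheme, and comparison data are assumptions, not the paper's method):

```python
import numpy as np

def latent_entropy(z, bins=30):
    """Estimate the mean per-dimension Shannon entropy (in bits) of a
    latent matrix z of shape (n_samples, n_dims) via histogram binning.
    Illustrative sketch only; not the estimator used in the paper."""
    entropies = []
    for dim in z.T:
        counts, _ = np.histogram(dim, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]  # drop empty bins so log2 is well defined
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

# Hypothetical comparison: a uniform latent spreads mass evenly across
# bins and so carries higher histogram entropy than a Gaussian latent.
rng = np.random.default_rng(0)
h_gauss = latent_entropy(rng.normal(size=(10000, 64)))
h_uniform = latent_entropy(rng.uniform(size=(10000, 64)))
```

Under this kind of estimator, a more information-dense latent memory (e.g., the TransVAE's) would register a higher average entropy than a more compressed one (e.g., the RNNAttn's), which is the sense in which the abstract ranks model complexity.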