Theoretical and Computational Chemistry

Neural Scaling of Deep Chemical Models

Authors

Abstract

Massive scale, in terms of both data availability and computation, has enabled significant breakthroughs in key application areas of deep learning such as natural language processing (NLP) and computer vision. There is emerging evidence that scale may be a key ingredient in scientific deep learning, but the importance of physical priors in scientific domains makes the strategies and benefits of scaling uncertain. Here, we investigate neural scaling behavior in large chemical models by varying model and dataset sizes over many orders of magnitude, studying models with over one billion parameters pre-trained on datasets of up to ten million data points. We consider large language models for generative chemistry and graph neural networks for machine-learned interatomic potentials. To enable large-scale scientific deep learning studies under resource constraints, we develop the Training Performance Estimation (TPE) framework, which reduces the costs of scalable hyperparameter optimization by up to 90%. Using this framework, we discover empirical neural scaling relations for deep chemical models and investigate the interplay between physical priors and scale. We also demonstrate potential applications of large, pre-trained models for "prompt engineering" and unsupervised representation learning of molecules.
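As a rough illustration of what an empirical neural scaling relation looks like in practice, the sketch below fits a saturating power law to (model size, validation loss) pairs. The functional form, initial guesses, and data values are assumptions chosen for illustration only; they are not the fits or measurements reported in the paper.

```python
# Minimal sketch: fitting an empirical neural scaling relation of the form
# L(N) = a * N**(-alpha) + L_inf to (model size, validation loss) pairs.
# All numbers below are illustrative placeholders, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, a, alpha, l_inf):
    """Saturating power law commonly used to describe neural scaling."""
    return a * n_params ** (-alpha) + l_inf

# Hypothetical measurements across four model sizes.
n_params = np.array([1e6, 1e7, 1e8, 1e9])
val_loss = np.array([2.38, 1.69, 1.30, 1.08])

# p0 is a rough initial guess for (a, alpha, L_inf).
popt, _ = curve_fit(scaling_law, n_params, val_loss, p0=[10.0, 0.2, 0.5], maxfev=10000)
a, alpha, l_inf = popt
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss = {l_inf:.3f}")
```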

Content

NMI_NeuralScaling.pdf

Supplementary weblinks

Litmatter
Rapid experimentation and scaling of deep learning models on molecular and crystal graphs.
ChemGPT Models
Models and tokenizers for ChemGPT, a large language model for chemical generation; see the usage sketch after this list.
Neural force field models
Equivariant graph neural network model checkpoints for machine-learned interatomic potentials; see the sketch after this list.
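As a rough illustration of how the ChemGPT checkpoints might be used, the sketch below samples a completion from a causal language model, assuming the released models and tokenizers follow the standard Hugging Face transformers interface. The model identifier and the SELFIES prompt are placeholders; substitute the names given on the linked model card.

```python
# Minimal sketch of sampling from a ChemGPT checkpoint, assuming the released
# models/tokenizers load through the standard `transformers` causal-LM classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/or/hub-id-of-ChemGPT"  # placeholder; see the linked model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "[C][C][O]"  # illustrative SELFIES-style fragment
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The interatomic-potential checkpoints are typically used to predict a scalar energy and to obtain forces as the negative gradient of that energy with respect to atomic positions. The sketch below illustrates that general pattern with a toy stand-in model; it is not the loading interface of the released equivariant GNN checkpoints, which is documented alongside them.

```python
# Minimal sketch of the energy/force pattern behind machine-learned interatomic
# potentials. ToyEnergyModel is a placeholder for a released GNN checkpoint.
import torch

class ToyEnergyModel(torch.nn.Module):
    """Placeholder energy model: sums a small MLP over interatomic distances."""
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.SiLU(), torch.nn.Linear(16, 1)
        )

    def forward(self, positions):
        dists = torch.pdist(positions).unsqueeze(-1)  # pairwise distances
        return self.mlp(dists).sum()                  # scalar total energy

model = ToyEnergyModel()
positions = torch.randn(5, 3, requires_grad=True)  # 5 atoms, Cartesian coords

energy = model(positions)
forces = -torch.autograd.grad(energy, positions)[0]  # F = -dE/dR
print(energy.item(), forces.shape)
```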