Abstract
Deep learning (DL) in chemistry has made significant progress, yet its applicability is limited by the scarcity of large, labeled datasets and the difficulty of extracting meaningful molecular features. Recently, molecular representation learning (MRL) has emerged as a powerful approach to these challenges by decoupling feature extraction from property prediction. In MRL, a deep network is first trained to learn molecular features from large, unlabeled datasets and then finetuned for property prediction in smaller, specialized domains. The advent of foundation models — large models trained on diverse datasets that can address a variety of downstream tasks — has also transformed the field of DL. For example, large language models (LLMs) like OpenAI's GPT-4 can be finetuned with minimal additional data for tasks considerably different from those seen during training. While MRL methods have been widely applied across chemical applications, these models are typically trained from scratch on molecular data. This study proposes that foundation models can serve as an advantageous starting point for developing MRL models. We explore this idea by leveraging OpenAI's CLIP vision foundation model as the backbone for MoleCLIP, a molecular image representation learning framework. On standard benchmarks, MoleCLIP requires significantly less molecular pretraining data to match the performance of state-of-the-art models. Furthermore, MoleCLIP outperforms existing models on homogeneous catalysis datasets, underscoring its robustness to distribution shifts and its ability to adapt effectively to varied tasks and datasets. This successful application of a general foundation model to chemical tasks highlights the potential of innovations in DL research to advance synthetic chemistry and, more broadly, any field where molecular property description is central to discovery.