A Survey of Molecular Representation Learning: From Single Modalities to Foundation Models

13 May 2025, Version 1
This content is a preprint and has not undergone peer review at the time of posting.

Abstract

Molecular representation learning (MRL) has recently emerged as a fundamental domain in cheminformatics. It aims to replace traditional handcrafted molecular descriptors with machine-learned representations derived from raw chemical data. This survey presents a comprehensive overview of MRL approaches, tracing the evolution from unimodal methods—such as graph-, string-, and image-based encoders—to recent multimodal frameworks that integrate several molecular data types, including structural, textual, and experimental inputs. We categorize existing multimodal methodologies by their integration strategies—alignment, translation, and fusion—and examine their training strategies. These models are discussed in light of the emerging concept of chemical foundation models, which seek to unify multiple chemical modalities through large-scale self-supervised learning, enabling robust, transferable representations applicable across a wide range of chemical tasks. We conclude by identifying the defining characteristics of chemical foundation models, reviewing recent efforts in this developing field, and outlining future directions toward a universal chemical foundation model.
