Abstract
Predicting molecular taste remains a significant challenge in food science. Here, we present FART (Flavor Analysis and Recognition Transformer), a chemical language model fine-tuned on the largest public dataset (15,031 compounds) of molecular tastants to date. When operating within confidence bounds, FART achieves 88.4% accuracy in predicting four fundamental taste categories—sweet, bitter, sour, and umami. Unlike previous approaches focused on binary classification, FART performs multi-class prediction while maintaining interpretability through gradient-based visualization of molecular features. The model identifies key structural elements driving taste properties and demonstrates utility in analyzing both known tastants and novel compounds. By making both the model and dataset publicly available, we provide the food science community with tools for rapid taste prediction, potentially accelerating the development of new flavor compounds and enabling systematic exploration of taste chemistry.
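As an illustration of how such a fine-tuned chemical language model could be queried in practice, the sketch below uses the Hugging Face transformers API to classify a SMILES string into the four taste categories and abstain when the predicted probability falls below a confidence cut-off. The repository identifier, label order, and threshold value are placeholders chosen for illustration, not the actual FART release details.

```python
# Minimal sketch, assuming a fine-tuned sequence-classification checkpoint
# is published on HuggingFace. The model ID, label order, and threshold
# below are hypothetical placeholders, not the paper's exact settings.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "your-org/fart-taste-classifier"   # hypothetical repository name
LABELS = ["sweet", "bitter", "sour", "umami"]  # assumed label order
CONFIDENCE_THRESHOLD = 0.8                     # illustrative confidence bound

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def predict_taste(smiles: str):
    """Return (label, confidence) for a SMILES string, or None if below the cut-off."""
    inputs = tokenizer(smiles, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    confidence, index = probs.max(dim=-1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        return None  # abstain outside the confidence bounds
    return LABELS[index.item()], confidence.item()

# Example query: citric acid, a common sour tastant
print(predict_taste("OC(=O)CC(O)(CC(=O)O)C(=O)O"))
```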
Supplementary materials
Title
Supplementary Information
Description
Supplementary Information including Supplementary Methods, Supplementary Tables and Supplementary Figures.
Supplementary weblinks
Title
GitHub with Code and Data
Description
Link to the GitHub repository where the code and data used in this work are available.
Title
HuggingFace with Code and Data
Description
Link to HuggingFace with the code for the transformer models trained in this work. The dataset used in this work is also available there.