We present a model-agnostic method that generates natural language explanations of molecular structure-property predictions. Machine learning models are now common for molecular property prediction and chemical design, but they are typically black boxes, offering no explanation for their predictions. We show how surrogate models can be used to attribute predictions to chemical descriptors and molecular substructures, independent of the black-box model's input representation. The method produces explanations consistent with chemical reasoning, such as connecting a prediction to the presence of a functional group or to molecular polarity. In a realistic test case, blood-brain barrier permeation, our descriptor explanations match biologically observed structure-activity relationships with mechanistic support. Finally, we show that these quantitative explanations can be further translated into natural language.
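The surrogate-attribution idea summarized above can be sketched as a LIME-style local linear fit: perturb the descriptors of the molecule being explained, query the black-box model on the perturbed points, and fit a distance-weighted linear surrogate whose coefficients serve as descriptor attributions. This is a minimal illustration under assumed details (Gaussian descriptor perturbations, a toy black box); `surrogate_explain` and all names here are hypothetical, not the paper's implementation.

```python
import numpy as np

def surrogate_explain(black_box, x0, n_samples=500, noise=0.3, seed=0):
    """Local surrogate attribution (LIME-style sketch).

    Perturbs the descriptor vector x0, queries the black-box model,
    and fits a proximity-weighted linear model; the fitted coefficients
    attribute the prediction to individual descriptors.
    """
    rng = np.random.default_rng(seed)
    # sample perturbed descriptor vectors around the instance
    X = x0 + noise * rng.standard_normal((n_samples, x0.size))
    y = np.array([black_box(x) for x in X])
    # weight samples by proximity to x0 (Gaussian kernel)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * noise**2 * x0.size))
    # weighted least squares with an intercept column
    A = np.hstack([X, np.ones((n_samples, 1))])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]  # per-descriptor attributions (intercept dropped)

# toy stand-in for a black-box property model over 3 descriptors
black_box = lambda x: 3.0 * x[0] - 0.5 * x[2]
x0 = np.array([1.0, 0.0, 2.0])
attributions = surrogate_explain(black_box, x0)
```

Because the toy black box is linear, the surrogate recovers its coefficients almost exactly; for a real model the coefficients are only a local approximation near the explained molecule, which is what makes the approach model-agnostic.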