Abstract
Machine learning (ML) has demonstrated its potential in atomistic simulations to bridge the gap between accurate first-principles methods and computationally efficient empirical potentials. This is achieved by learning mappings between a system's structure and its physical properties. State-of-the-art models for potential energy surfaces typically represent chemical structures through (semi-)local atomic environments. However, this approach neglects long-range interactions (most notably electrostatics) and non-local phenomena such as charge transfer, leading to significant errors in the description of molecules or materials in polar anisotropic environments. To address these challenges, ML frameworks that predict self-consistent charge distributions in atomistic systems using the Charge Equilibration (QEq) method are currently popular. In this approach, atomic charges are derived from an electrostatic energy expression that incorporates environment-dependent atomic electronegativities. Herein, we explore the limits of this concept using the example of the previously reported Kernel Charge Equilibration (kQEq) approach, combined with local short-range potentials. To this end, we consider prototypical systems with varying total charge states and applied electric fields. We find that charge-equilibration-based models perform well in most situations. However, we also find that some pathologies of conventional QEq carry over to the ML variants, in the form of spurious charge transfer and overpolarization in the presence of static electric fields. This indicates a need for new methodological developments.
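To make the QEq idea concrete, the following is a minimal sketch (not the paper's kQEq implementation) of charge equilibration for a toy system: the electrostatic energy E(q) = Σᵢ χᵢqᵢ + ½ Σᵢⱼ Jᵢⱼqᵢqⱼ is minimized under a total-charge constraint via a Lagrange multiplier, yielding a linear system. The function name, the example electronegativities `chi`, and the hardness/Coulomb matrix `J` are illustrative assumptions.

```python
import numpy as np

def charge_equilibration(chi, J, Q_total=0.0):
    """Solve the QEq stationarity conditions (illustrative sketch):
    minimize E(q) = chi @ q + 0.5 * q @ J @ q  subject to sum(q) = Q_total.
    The constraint is enforced with a Lagrange multiplier, giving the
    bordered linear system  [[J, 1], [1^T, 0]] [q, mu] = [-chi, Q_total]."""
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = J           # hardness + interatomic Coulomb terms
    A[:n, n] = 1.0          # constraint column
    A[n, :n] = 1.0          # constraint row: sum(q) = Q_total
    b = np.concatenate([-np.asarray(chi, dtype=float), [Q_total]])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]  # atomic charges, chemical potential

# Toy diatomic example (hypothetical parameters): atom 2 is more
# electronegative, so it should acquire a negative partial charge.
chi = np.array([1.0, 2.0])
J = np.array([[2.0, 0.5],
              [0.5, 2.0]])
q, mu = charge_equilibration(chi, J, Q_total=0.0)
```

In an ML variant such as kQEq, the electronegativities χᵢ are not fixed per element but predicted as functions of each atom's local environment; the charge solution step itself remains this same constrained minimization.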