Abstract
Active learning requires substantial time for comprehensive sampling of complex potential energy surfaces (PESs) to achieve the desired accuracy and stability of machine learning (ML) potentials. Here we develop an active delta-learning (ADL) protocol that speeds up active learning and yields delta-learning models that remain stable in simulations. Our results show that ADL needs ca. ten times fewer sampled points and iterations to obtain models of the same accuracy as models built without delta-learning. A crucial advantage of the models built with the delta-learning protocol is their remarkable simulation stability: even models from the initial active-learning iterations yield reasonable results. In contrast, pure ML potentials built without delta-learning often cause simulations to collapse into unphysical structures.