Random Sampling Versus Active Learning Algorithms for Machine Learning Potentials of Quantum Liquid Water
Journal of Chemical Theory and Computation (2025)
Ruhr University Bochum
Abstract
Training accurate machine learning potentials requires electronic structure data comprehensively covering the configurational space of the system of interest. As the construction of these data is computationally demanding, many schemes for identifying the most important structures have been proposed. Here, we compare the performance of high-dimensional neural network potentials (HDNNPs) for quantum liquid water at ambient conditions trained to data sets constructed using random sampling as well as various flavors of active learning based on query by committee. Contrary to the common understanding of active learning, we find that for a given data set size, random sampling leads to smaller test errors for structures not included in the training process. In our analysis we show that this can be related to small energy offsets caused by a bias in structures added in active learning, which can be overcome by instead using energy correlations as an error measure that is invariant to such shifts. Still, all HDNNPs yield very similar and accurate structural properties of quantum liquid water, which demonstrates the robustness of the training procedure with respect to the training set construction algorithm even when trained to as few as 200 structures. However, we find that for active learning based on preliminary potentials, a reasonable initial data set is important to avoid an unnecessary extension of the covered configuration space to less relevant regions.
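The abstract refers to two technical ingredients: active learning by query by committee and an energy-correlation error measure that is insensitive to constant energy offsets. The minimal sketch below is not from the paper; it illustrates both ideas under simple assumptions. The committee members are hypothetical model objects exposing a `predict_energy(structure)` method, and the shift-invariant error is realized here as an RMSE computed after subtracting each data set's mean energy; the authors' actual selection criterion and error metric may differ in detail.

```python
import numpy as np

def committee_disagreement(committee, structures):
    """Standard deviation of predicted energies across the committee, per structure."""
    preds = np.array([[model.predict_energy(s) for s in structures]
                      for model in committee])          # shape: (n_models, n_structures)
    return preds.std(axis=0)

def select_by_committee(committee, candidate_pool, n_add):
    """Query by committee: pick the n_add candidates with the largest disagreement."""
    disagreement = committee_disagreement(committee, candidate_pool)
    top = np.argsort(disagreement)[::-1][:n_add]
    return [candidate_pool[i] for i in top]

def rmse(e_pred, e_ref):
    """Conventional RMSE; a constant energy offset between the two sets inflates it."""
    e_pred, e_ref = np.asarray(e_pred), np.asarray(e_ref)
    return np.sqrt(np.mean((e_pred - e_ref) ** 2))

def shift_invariant_error(e_pred, e_ref):
    """Correlation-style error: removing each set's mean makes the measure invariant
    to a constant shift, so small energy offsets no longer dominate the comparison."""
    e_pred, e_ref = np.asarray(e_pred), np.asarray(e_ref)
    dp = e_pred - e_pred.mean()
    dr = e_ref - e_ref.mean()
    return np.sqrt(np.mean((dp - dr) ** 2))
```

In an active-learning loop of the kind compared in the paper, `select_by_committee` would be called repeatedly on a pool of candidate configurations, the selected structures recomputed with the reference electronic structure method, and the committee retrained; the two error functions show why a test-set comparison can change once constant offsets are discounted.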