Self-Consistency Training for Hamiltonian Prediction
arXiv (2024)
Abstract
Hamiltonian prediction is a versatile formulation to leverage machine
learning for solving molecular science problems. Yet, its applicability is
limited by insufficient labeled data for training. In this work, we highlight
that Hamiltonian prediction possesses a self-consistency principle, based on
which we propose an exact training method that does not require labeled data.
This merit addresses the data scarcity difficulty, and distinguishes the task
from other property prediction formulations with unique benefits: (1)
self-consistency training enables the model to be trained on a large amount of
unlabeled data, which substantially enhances generalization; (2)
self-consistency training is more efficient than labeling data with DFT for
supervised training, since it amortizes the DFT calculation over a set of
molecular structures. We empirically demonstrate the better generalization in
data-scarce and out-of-distribution scenarios, and the better efficiency from
the amortization. These benefits push the applicability of Hamiltonian
prediction toward ever larger scales.
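The core idea can be illustrated with a toy sketch: a predicted Hamiltonian is self-consistent if it reproduces itself after one pass through the DFT map, i.e., diagonalize the prediction, build a density matrix from the occupied orbitals, and apply a Fock-style construction to that density matrix. The residual between the prediction and the reconstructed Hamiltonian then serves as a label-free training loss. The sketch below is an illustrative simplification, not the paper's implementation: `fock_map` uses a hypothetical core-plus-Coulomb model in place of a full DFT exchange-correlation functional, and all matrices are random stand-ins.

```python
import numpy as np

def density_matrix(H, n_occ):
    # Diagonalize the predicted Hamiltonian and build the closed-shell
    # density matrix from the n_occ lowest-energy orbitals.
    _, C = np.linalg.eigh(H)
    C_occ = C[:, :n_occ]
    return 2.0 * C_occ @ C_occ.T

def fock_map(D, H_core, J_tensor):
    # Hypothetical, simplified Fock construction: core Hamiltonian plus a
    # Coulomb-like contraction with a two-electron tensor. A real DFT code
    # would also include exchange-correlation contributions here.
    return H_core + np.einsum('pqrs,rs->pq', J_tensor, D)

def self_consistency_loss(H_pred, H_core, J_tensor, n_occ):
    # Self-consistency residual: a converged Hamiltonian should reproduce
    # itself after one pass through the map H -> F(D(H)). No DFT labels
    # are needed; only the DFT map itself is evaluated.
    D = density_matrix(H_pred, n_occ)
    H_sc = fock_map(D, H_core, J_tensor)
    return float(np.mean((H_pred - H_sc) ** 2))

# Toy example on a random symmetric 4x4 system.
rng = np.random.default_rng(0)
n = 4
H_core = rng.standard_normal((n, n))
H_core = 0.5 * (H_core + H_core.T)          # symmetrize
J = 0.1 * rng.standard_normal((n, n, n, n))  # stand-in two-electron tensor
H_pred = H_core.copy()                       # pretend model output
loss = self_consistency_loss(H_pred, H_core, J, n_occ=2)
print(loss)
```

In actual training, `H_pred` would come from a neural network and the loss gradient would flow back through the eigendecomposition and the Fock construction; amortizing this single map evaluation over many structures is what makes the approach cheaper than running full SCF loops to generate labels.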