HERTA: A High-Efficiency and Rigorous Training Algorithm for Unfolded Graph Neural Networks
arXiv (2024)
Abstract
As a variant of Graph Neural Networks (GNNs), Unfolded GNNs offer enhanced
interpretability and flexibility over traditional designs. Nevertheless, they
still suffer from scalability challenges in terms of training cost.
Although many methods have been proposed to address the scalability issues,
they mostly focus on per-iteration efficiency, without worst-case convergence
guarantees. Moreover, those methods typically add components to or modify the
original model, thus possibly breaking the interpretability of Unfolded GNNs.
In this paper, we propose HERTA: a High-Efficiency and Rigorous Training
Algorithm for Unfolded GNNs that accelerates the whole training process,
achieving a nearly-linear time worst-case training guarantee. Crucially, HERTA
converges to the optimum of the original model, thus preserving the
interpretability of Unfolded GNNs. Additionally, as a byproduct of HERTA, we
propose a new spectral sparsification method applicable to normalized and
regularized graph Laplacians that ensures tighter bounds for our algorithm than
existing spectral sparsifiers do. Experiments on real-world datasets verify the
superiority of HERTA as well as its adaptability to various loss functions and
optimizers.
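For context, the following is a minimal sketch of the quadratic energy that unfolded GNNs commonly minimize (an assumed standard formulation, e.g. TWIRLS-style; the abstract does not spell out HERTA's exact objective), which also exhibits the kind of normalized and regularized graph Laplacian that the abstract's spectral sparsifier targets:

% Assumed common unfolded-GNN objective; notation is hypothetical, not from the abstract.
% X: node feature matrix, A: adjacency matrix, D: degree matrix, \lambda > 0: regularization weight.
\[
  \min_{Z}\; \|Z - X\|_F^2 \;+\; \lambda\,\mathrm{tr}\!\left(Z^\top \tilde{L} Z\right),
  \qquad \tilde{L} = I - D^{-1/2} A D^{-1/2},
\]

whose closed-form minimizer is $Z^\star = (I + \lambda \tilde{L})^{-1} X$; the matrix $I + \lambda \tilde{L}$ is a normalized and regularized graph Laplacian of the kind the proposed sparsifier applies to.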