A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks

ICLR (2021)

Cited by 78 | Views 379
Abstract
In this paper, we derive generalization bounds for two primary classes of graph neural networks (GNNs), namely graph convolutional networks (GCNs) and message passing GNNs, via a PAC-Bayesian approach. Specifically, our result reveals that the maximum node degree and the spectral norms of the weights govern the generalization bound. Importantly, our bound is a natural generalization of the results developed in \cite{neyshabur2017pac} for fully-connected and convolutional neural networks. For message passing GNNs, our PAC-Bayes bound improves over the Rademacher-complexity-based bound in \cite{garg2020generalization}, showing a tighter dependency on the maximum node degree and the maximum hidden dimension. The key ingredients of our proof are a perturbation analysis of GNNs and a generalization of PAC-Bayes analysis to non-homogeneous GNNs. We perform an empirical study on several real-world graph datasets and verify that our PAC-Bayes bound is tighter than existing bounds.
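The abstract states that the bound is governed by the maximum node degree and the spectral norms of the weight matrices. As an illustrative sketch only (not the paper's exact formula), these two dominant quantities can be computed for a toy GCN as follows; the graph, weights, and function names here are hypothetical:

```python
import numpy as np

def max_node_degree(adj):
    """Maximum node degree of an undirected graph given as a 0/1 adjacency matrix."""
    return int(adj.sum(axis=1).max())

def spectral_norm_product(weights):
    """Product of spectral norms (largest singular values) of the layer weight matrices.

    Bounds of this family typically scale with such a product across layers;
    this is a proxy for the bound's weight-dependent term, not the bound itself.
    """
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Toy example: a path graph on 4 nodes and two small random weight matrices.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
rng = np.random.default_rng(0)
weights = [rng.standard_normal((5, 5)) * 0.1 for _ in range(2)]

d = max_node_degree(adj)            # path graph: interior nodes have degree 2
norm_term = spectral_norm_product(weights)
print(d, norm_term)
```

Empirically tracking these quantities during training is one way to compare the tightness of degree- and norm-dependent bounds, as the paper's empirical study does across real-world graph datasets.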
Keywords
generalization bounds, neural networks, graph, pac-bayesian