Learning by Transference: Training Graph Neural Networks on Growing Graphs

arXiv (2023)

Abstract
Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful feature representations from network data. On large-scale graphs, however, convolutions incur a high computational cost, leading to scalability limitations. Leveraging the graphon (the limit object of a sequence of graphs), in this paper we consider the problem of learning a graphon neural network (WNN), the limit object of a GNN, by training GNNs on graphs sampled from the graphon. Under smoothness conditions, we show that: (i) the expected distance between the learning steps on the GNN and on the WNN decreases asymptotically with the size of the graph, and (ii) when training on a sequence of growing graphs, gradient descent follows the learning direction of the WNN. Inspired by these results, we propose a novel algorithm for learning GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training. We benchmark this algorithm on a recommendation system and a decentralized control problem, where it retains performance comparable to that of its large-scale counterpart at a reduced computational cost.
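For intuition, here is a minimal PyTorch sketch of such a training-by-transference loop; it is not the authors' implementation. The exponential graphon, the size schedule [64, 128, 256, 512], the toy regression target, and the two-layer architecture are all illustrative assumptions. The essential point is that the GNN's filter weights do not depend on the graph size, so the same parameters keep training as the sampled graph grows.

```python
import numpy as np
import torch
import torch.nn as nn

def graphon(u, v):
    # Illustrative smooth graphon: W(u, v) = exp(-|u - v|).
    return np.exp(-np.abs(u - v))

def sample_graph(n, rng):
    # Draw n latent points uniformly and connect i, j with probability W(u_i, u_j).
    u = rng.uniform(size=n)
    probs = graphon(u[:, None], u[None, :])
    adj = (rng.uniform(size=(n, n)) < probs).astype(np.float32)
    adj = np.triu(adj, 1)
    adj = adj + adj.T                                  # symmetric, no self-loops
    # Normalizing the adjacency by n is one common convention under which the
    # GNN output converges to its graphon (WNN) limit as n grows.
    return torch.tensor(adj / n), torch.tensor(u, dtype=torch.float32)

class GNN(nn.Module):
    # The weights are independent of the graph size, so they transfer
    # unchanged when training moves to a larger sampled graph.
    def __init__(self, hidden=16):
        super().__init__()
        self.lin1 = nn.Linear(1, hidden)
        self.lin2 = nn.Linear(hidden, 1)

    def forward(self, adj, x):
        h = torch.relu(adj @ self.lin1(x))             # graph convolution layer
        return (adj @ self.lin2(h)).squeeze(-1)        # linear readout layer

rng = np.random.default_rng(0)
model = GNN()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for n in [64, 128, 256, 512]:                          # successively grow the graph
    adj, u = sample_graph(n, rng)
    x = u.unsqueeze(-1)                                # node feature: latent point
    target = torch.sin(np.pi * u)                      # toy regression target
    for _ in range(100):                               # gradient steps at this size
        opt.zero_grad()
        loss = ((model(adj, x) - target) ** 2).mean()
        loss.backward()
        opt.step()
    print(f"n={n}  final loss={loss.item():.4f}")
```

Because early iterations run on small graphs, most gradient steps are cheap; only the final stages pay the cost of large-graph convolutions.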
Keywords
Graph neural networks, graphons, machine learning