Edge convolutional networks: Decomposing graph convolutional networks for stochastic training with independent edges

Neurocomputing (2023)

Abstract
After the success of Graph Convolutional Networks (GCN), many stochastic training methods have been proposed to resolve the scalability and efficiency issues of GCN by sampling. In mini-batch training, a common phase of these methods is to form a small-scale subgraph rooted in the given batch. The subgraph formation leads to heavy time consumption, additional space occupation, and complex implementation. To rectify these issues, we eliminate the subgraph formation phase and propose Edge Convolutional Network (ECN), which is trained with independently sampled edges. It has constant time complexity for sampling, reducing the sampling time by orders of magnitude without compromising convergence speed. In particular, when there are two convolutional layers, as in the most common setting, GCN itself can also be trained with the techniques behind ECN, gaining a substantial sampling time reduction without trade-offs. We prove that the expressiveness difference between ECN and GCN is theoretically bounded and examine the inference performance of ECN through extensive experiments on real-world, large-scale graphs. Furthermore, we improve ECN with advanced mechanisms of GCN, including skip connection, identity mapping, embedding, and attention. With the proper mechanisms integrated, ECN rivals state-of-the-art (SotA) baselines in inductive node classification and achieves new SotA accuracy on the Flickr dataset. The code is available at https://github.com/cf020031308/ECN.

© 2023 Elsevier B.V. All rights reserved.
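The key idea stated in the abstract is that each layer's neighborhood aggregation can be estimated from edges drawn independently and uniformly at random, so no batch-rooted subgraph ever has to be built. Below is a minimal PyTorch sketch of one such layer; the class name EdgeSampledConv, the (2, E) edge_index layout, and the scatter-mean estimator are illustrative assumptions for exposition, not the authors' implementation.

# Sketch of one graph-convolution-like layer whose aggregation is
# estimated from independently sampled edges, with no subgraph formation.
# NOT the paper's code: EdgeSampledConv, edge_index, and the scatter-mean
# estimator are assumptions made for illustration.
import torch
import torch.nn as nn

class EdgeSampledConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, num_samples):
        # x: (n, in_dim) node features; edge_index: (2, E) rows (src, dst).
        E = edge_index.size(1)
        n = x.size(0)
        # Constant-cost sampling: draw edges i.i.d. uniformly at random.
        idx = torch.randint(0, E, (num_samples,), device=edge_index.device)
        src, dst = edge_index[0, idx], edge_index[1, idx]
        # Scatter-mean the sampled messages into their destination nodes.
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])
        cnt = torch.zeros(n, device=x.device)
        cnt.index_add_(0, dst, torch.ones(num_samples, device=x.device))
        agg = agg / cnt.clamp(min=1).unsqueeze(-1)
        return torch.relu(self.lin(agg))

# Usage: x = torch.randn(1000, 32); edges = torch.randint(0, 1000, (2, 5000))
# layer = EdgeSampledConv(32, 16); out = layer(x, edges, num_samples=1024)

Because each step only indexes num_samples edges, the sampling cost is O(num_samples) regardless of graph size, which mirrors the constant time complexity the abstract claims for ECN's sampling phase.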
Keywords
Graph convolutional network, Stochastic training, Sampling, Attention