Graph neural networks meet with distributed graph partitioners and reconciliations

Neurocomputing (2023)

Abstract
Graph neural networks (GNNs) have shown great success in various applications. Since real-world graphs are large, training GNNs in distributed systems is desirable. In current training schemes, edge partitioning strategies strongly affect GNN performance, because high-degree nodes exert an unbalanced influence across partitions and the neighborhood integrity of low-degree nodes is damaged. Meanwhile, the lack of reconciliation among the different local models causes convergence to fluctuate across workers. In this work, we design DEPR, a framework suited to distributed GNN training. We propose a degree-sensitive edge partitioning scheme with influence balancing and locality preservation, adapted to distributed GNN training under an owner-computes rule (each partition performs all computations involving the data it owns). Knowledge distillation and contrastive learning are then used to reconcile the fusion of local models and accelerate convergence. Extensive experiments on the node classification task over three large-scale graph datasets (Reddit, Amazon, and OGB-Products) show that DEPR achieves a 2x convergence speedup and an absolute improvement of up to 3.97 in F1-micro score compared to DistDGL.
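The abstract states that local models are reconciled with knowledge distillation and contrastive learning. The sketch below is a minimal, hypothetical PyTorch illustration of how such a combined reconciliation loss could look on one worker; the function names, the fused-model "teacher" logits, and the weights `alpha`/`beta` are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch: reconciling a worker's local model against a fused/global
# model via knowledge distillation plus a contrastive term. All names and
# hyperparameters here are illustrative assumptions, not DEPR's API.
import torch
import torch.nn.functional as F

def distillation_loss(local_logits, fused_logits, temperature=2.0):
    """KL divergence between the local model's softened predictions (student)
    and the fused model's softened predictions (teacher)."""
    student = F.log_softmax(local_logits / temperature, dim=-1)
    teacher = F.softmax(fused_logits / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

def contrastive_loss(z_local, z_fused, tau=0.5):
    """InfoNCE-style loss: pull each node's local embedding toward the fused
    model's embedding of the same node, push it away from other nodes."""
    z_local = F.normalize(z_local, dim=-1)
    z_fused = F.normalize(z_fused, dim=-1)
    sim = z_local @ z_fused.t() / tau          # [N, N] similarity matrix
    labels = torch.arange(z_local.size(0))     # positives lie on the diagonal
    return F.cross_entropy(sim, labels)

# Hypothetical per-worker objective combining the supervised loss with
# both reconciliation terms (alpha, beta are assumed trade-off weights):
# loss = ce_loss + alpha * distillation_loss(local_logits, fused_logits) \
#               + beta * contrastive_loss(z_local, z_fused)
```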
Keywords
Distributed GNNs, Graph Partitioning, Knowledge Distillation, Graph Contrastive Learning