Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks

2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD 2023)

Abstract
Graph Convolutional Networks (GCNs) are pivotal in extracting latent information from graph data across various domains, yet their acceleration on mainstream GPUs is challenged by workload imbalance and irregular memory access. To address these challenges, we present Accel-GCN, a GPU accelerator architecture for GCNs. The design of Accel-GCN encompasses: (i) a lightweight degree-sorting stage to group nodes with similar degrees; (ii) a block-level partition strategy that dynamically adjusts warp workload sizes, enhancing shared-memory locality and workload balance while reducing metadata overhead compared to designs like GNNAdvisor; (iii) a combined warp strategy that improves memory coalescing and computational parallelism along the column dimension of dense matrices. Building on these principles, we formulate an SpMM kernel for GCNs that employs block-level partitioning and the combined warp strategy. This approach improves performance and multilevel memory efficiency, and optimizes memory bandwidth utilization by exploiting memory coalescing and alignment. Evaluation of Accel-GCN across 18 benchmark graphs shows that it outperforms cuSPARSE, GNNAdvisor, and GraphBLAST by factors of 1.17x, 1.86x, and 2.94x respectively. These results underscore Accel-GCN as an effective solution for enhancing GCN computational efficiency. The implementation can be found on Github*.
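The kernel the abstract describes accelerates the sparse-dense matrix multiplication (SpMM) at the heart of GCN aggregation. As context, a minimal reference sketch of that operation in Python (using SciPy's CSR format, the layout SpMM kernels typically consume; all names and sizes here are illustrative, not from the paper):

```python
import numpy as np
import scipy.sparse as sp

# A GCN layer's forward pass reduces to an SpMM:
#   H_next = A_hat @ (H @ W)
# where A_hat is the sparse (normalized) adjacency matrix,
# H holds dense node features, and W is the layer weight.
# This is plain reference code, not the Accel-GCN GPU kernel.

rng = np.random.default_rng(0)
num_nodes, in_dim, out_dim = 6, 4, 3  # toy sizes for illustration

# Sparse adjacency in CSR format
A_hat = sp.random(num_nodes, num_nodes, density=0.3,
                  format="csr", random_state=0)

H = rng.standard_normal((num_nodes, in_dim))   # node features
W = rng.standard_normal((in_dim, out_dim))     # layer weights

H_next = A_hat @ (H @ W)  # the SpMM that Accel-GCN accelerates on GPU
print(H_next.shape)       # (6, 3)
```

The imbalance the paper targets arises because rows of `A_hat` (node degrees) vary widely, so naive per-row parallelization leaves some GPU warps far busier than others.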
Keywords
Graph Convolutional Network, sparse matrix multiplication (SpMM), parallel computing, GPUs