RDMA-Based Algorithms for Sparse Matrix Multiplication on GPUs
CoRR (2023)
Abstract
Sparse matrix multiplication is an important kernel for large-scale graph
processing and other data-intensive applications. In this paper, we implement
various asynchronous, RDMA-based sparse-times-dense (SpMM) and sparse-times-sparse
(SpGEMM) matrix multiplication algorithms, evaluating their performance in a
distributed memory setting on GPUs. Our RDMA-based implementations use the
NVSHMEM communication library for direct, asynchronous one-sided communication
between GPUs. We compare our asynchronous implementations to state-of-the-art
bulk synchronous GPU libraries as well as a CUDA-aware MPI implementation of
the SUMMA algorithm. We find that asynchronous RDMA-based implementations are
able to offer favorable performance compared to bulk synchronous
implementations, while also allowing for the straightforward implementation of
novel work stealing algorithms.
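The abstract's "direct, asynchronous one-sided communication between GPUs" refers to NVSHMEM's one-sided get/put operations, which let one GPU read a remote GPU's memory without a matching call on the remote side. The sketch below illustrates that pattern; it is not the paper's code, and names such as `fetch_remote_tile` and `TILE_NNZ` are illustrative assumptions.

```cuda
// Hedged sketch of a one-sided NVSHMEM get, assuming a fixed-size tile of
// sparse matrix values owned by a remote PE (processing element / GPU).
#include <nvshmem.h>
#include <nvshmemx.h>

#define TILE_NNZ 1024  // assumed tile size, for illustration only

__global__ void fetch_remote_tile(float *local_vals, const float *remote_vals,
                                  int owner_pe) {
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        // One-sided get: pull the owner PE's tile values directly from its
        // GPU memory; no send/recv handshake or barrier is required.
        nvshmem_float_get(local_vals, remote_vals, TILE_NNZ, owner_pe);
    }
}

int main() {
    nvshmem_init();
    int pe = nvshmem_my_pe();

    // Symmetric allocation: the returned address is valid on every PE,
    // which is what makes one-sided remote access possible.
    float *vals = (float *)nvshmem_malloc(TILE_NNZ * sizeof(float));
    float *local;
    cudaMalloc(&local, TILE_NNZ * sizeof(float));

    int owner = (pe + 1) % nvshmem_n_pes();  // fetch from a neighbor PE
    fetch_remote_tile<<<1, 32>>>(local, vals, owner);
    nvshmem_quiet();            // complete outstanding one-sided operations
    cudaDeviceSynchronize();

    nvshmem_free(vals);
    cudaFree(local);
    nvshmem_finalize();
    return 0;
}
```

Because gets are asynchronous and require no remote participation, a GPU that finishes its tiles early can pull work from a busy peer, which is the basis of the work-stealing algorithms the abstract mentions.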