MSCCL: Microsoft Collective Communication Library

arXiv (2022)

Abstract
Machine learning models made up of millions or billions of parameters are often trained and served on large multi-GPU systems. As models grow in size and execute on more GPUs, the collective communication used by these applications becomes a bottleneck. Custom collective algorithms optimized for both particular network topologies and application-specific communication patterns can alleviate this bottleneck and thus help these applications scale. This paper introduces MSCCL, a system designed to make GPU communication programmable. MSCCL provides a data-oriented domain-specific language for writing custom collective communication algorithms and an optimizing compiler that lowers them to an executable form, which runs efficiently and flexibly in an interpreter-based runtime. We used MSCCL to write novel collective implementations of AllReduce and AllToAll that are up to 48% and 20% faster, respectively, than optimized vendor implementations. We also demonstrate how directly implementing an application-specific collective called AllToNext in MSCCL yields a 14.5x speedup over the baseline.
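To make the idea of a data-oriented collective algorithm concrete, the sketch below expresses a ring AllReduce as an explicit chunk schedule in plain Python. This is a hypothetical illustration of the kind of chunk-level algorithm description such a DSL targets, not MSCCL's actual API; the function name and event format are invented for this example.

```python
# Hypothetical sketch (not MSCCL's real DSL): a ring AllReduce written as an
# explicit schedule of per-chunk send events, the style of algorithm a
# data-oriented collective-communication language is designed to capture.

def ring_allreduce_schedule(num_gpus):
    """Return a list of (op, step, src_gpu, dst_gpu, chunk) events.

    Each GPU's buffer is split into `num_gpus` chunks. The first
    num_gpus - 1 steps reduce-scatter the chunks around the ring; the
    remaining num_gpus - 1 steps all-gather the fully reduced chunks.
    """
    events = []
    # Reduce-scatter phase: GPU i sends chunk (i - step) mod N to its
    # ring neighbor, which reduces it into its local copy.
    for step in range(num_gpus - 1):
        for src in range(num_gpus):
            dst = (src + 1) % num_gpus
            chunk = (src - step) % num_gpus
            events.append(("reduce", step, src, dst, chunk))
    # All-gather phase: circulate the reduced chunks so every GPU
    # ends up with the complete reduced buffer.
    for step in range(num_gpus - 1):
        for src in range(num_gpus):
            dst = (src + 1) % num_gpus
            chunk = (src + 1 - step) % num_gpus
            events.append(("copy", num_gpus - 1 + step, src, dst, chunk))
    return events


if __name__ == "__main__":
    # Print the 24-event schedule for a 4-GPU ring.
    for event in ring_allreduce_schedule(4):
        print(event)
```

A compiler for such a language can take a schedule like this and lower it to a form a GPU runtime executes directly, which is what allows topology- and application-specific algorithms to be swapped in without rewriting the communication runtime.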