Sketch-fusion: A gradient compression method with multi-layer fusion for communication-efficient distributed training

Lingfei Dai, Luqi Gong, Zhulin An, Yongjun Xu, Boyu Diao

Journal of Parallel and Distributed Computing (2024)

Abstract
Gradient compression is an effective technique for improving the efficiency of distributed training. However, introducing gradient compression can reduce model accuracy and training efficiency. We also find that layer-wise gradient compression algorithms incur significant compression and communication overhead, which degrades the scaling efficiency of a distributed training system. To address these issues, we propose Sketch-Fusion SGD, a method that leverages the Count-Sketch data structure to improve the scalability and training speed of distributed deep learning systems. Our method also employs LayerFusion, which improves the scalability and convergence efficiency of gradient compression algorithms by formulating an optimal multi-layer fusion strategy without introducing extra hyperparameters. We evaluate our method on a cluster of 16 GPUs and show that it improves training efficiency by up to 18.6% without compromising model accuracy. In addition, applying our LayerFusion algorithm to other gradient compression methods improves their scalability by up to 2.87x.
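To make the abstract's mechanism concrete, the sketch below illustrates the general idea of Count-Sketch-based gradient compression combined with multi-layer fusion: per-layer gradients are concatenated into one buffer (so a single sketch and a single communication step cover several layers) and then hashed into a small table that can be all-reduced and approximately inverted. This is a minimal illustration assuming generic names (`CountSketch`, `rows`, `cols`, `fuse_layers`); it is not the authors' implementation of Sketch-Fusion SGD or LayerFusion.

```python
# Minimal, illustrative Count-Sketch compression for a fused gradient buffer.
# Assumed names throughout; not the paper's reference implementation.
import numpy as np

class CountSketch:
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols, self.dim = rows, cols, dim
        # One bucket index and one random sign per (row, coordinate).
        self.buckets = rng.integers(0, cols, size=(rows, dim))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))
        self.table = np.zeros((rows, cols))

    def compress(self, grad):
        """Accumulate the gradient vector into the (rows x cols) sketch table."""
        self.table.fill(0.0)
        for r in range(self.rows):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * grad)
        return self.table

    def decompress(self):
        """Estimate each coordinate as the median of its signed bucket values."""
        est = np.stack([self.signs[r] * self.table[r, self.buckets[r]]
                        for r in range(self.rows)])
        return np.median(est, axis=0)

def fuse_layers(layer_grads):
    """Concatenate per-layer gradients into one buffer so a single sketch
    (and a single communication step) covers multiple layers."""
    return np.concatenate([g.ravel() for g in layer_grads])

# Usage: fuse two layers, sketch the fused gradient, then decompress.
layer_grads = [np.random.randn(64, 128), np.random.randn(128)]
fused = fuse_layers(layer_grads)                       # 8,320 values
sketch = CountSketch(rows=5, cols=500, dim=fused.size)
table = sketch.compress(fused)                         # 5 x 500 values sent instead
recovered = sketch.decompress()                        # approximate fused gradient
```

In a distributed setting, the small sketch tables would be summed across workers (e.g. with an all-reduce) before decompression, which is what reduces the communication volume relative to exchanging full gradients.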
Keywords
Gradient compression, Multi-layer fusion, Distributed stochastic gradient descent, Deep learning training