Optimal Topology Search for Fast Model Averaging in Decentralized Parallel SGD.
PAKDD (2) (2020)
Abstract
Distributed training of deep learning models on high-latency systems necessitates decentralized parallel SGD solutions. However, existing solutions suffer from slow convergence because of their hand-crafted topologies. This raises the question: for decentralized parallel SGD, is it possible to learn a topology that provides faster model averaging than its hand-crafted counterparts?
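To make the setting concrete, below is a minimal sketch (not the paper's method) of one round of decentralized parallel SGD: each worker takes a local gradient step, then gossip-averages its model with neighbors according to a doubly stochastic mixing matrix W that encodes the communication topology. The ring topology, the toy quadratic loss, and all variable names here are illustrative assumptions.

```python
import numpy as np

n_workers, dim = 4, 3
rng = np.random.default_rng(0)

# Local models, one row per worker.
x = rng.normal(size=(n_workers, dim))

# Hand-crafted ring topology (an assumption for illustration):
# each worker averages with itself and its two ring neighbors.
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    for j in (i - 1, i, i + 1):
        W[i, j % n_workers] = 1.0 / 3.0

def local_gradient(xi):
    # Toy quadratic loss 0.5 * ||xi||^2, so the gradient is xi itself.
    return xi

lr = 0.1
for step in range(100):
    # 1) Local SGD step on each worker.
    grads = np.stack([local_gradient(x[i]) for i in range(n_workers)])
    x = x - lr * grads
    # 2) Gossip averaging over the topology: x_i <- sum_j W[i, j] * x_j.
    x = W @ x

# With a connected topology and doubly stochastic W, the workers'
# models contract toward consensus.
print("disagreement:", np.max(np.abs(x - x.mean(axis=0))))
```

The averaging speed of such a scheme is governed by the spectral properties of W, which is why the choice of topology, hand-crafted here, learned in the paper, matters for convergence.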
Keywords
Optimal decentralized topology, Fast model averaging, Parallel stochastic gradient descent, Deep learning