Communication-Censored Distributed Stochastic Gradient Descent
IEEE Transactions on Neural Networks and Learning Systems (2022)
Abstract
This article develops a communication-efficient algorithm to solve the stochastic optimization problem defined over a distributed network, aiming to reduce the burdensome communication in applications such as distributed machine learning. Unlike existing works based on quantization and sparsification, we introduce a communication-censoring technique to reduce the transmissions of variables, which leads to our communication-censored distributed stochastic gradient descent (CSGD) algorithm. Specifically, in CSGD, the latest minibatch stochastic gradient at a worker is transmitted to the server if and only if it is sufficiently informative. When a fresh gradient is not transmitted, the server reuses the stale one. To implement this communication-censoring strategy, the batch size increases over iterations to alleviate the effect of stochastic gradient noise. Theoretically, CSGD enjoys the same order of convergence rate as SGD but effectively reduces communication. Numerical experiments demonstrate the sizable communication savings of CSGD.
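
To make the censoring mechanism concrete, below is a minimal Python/NumPy sketch of the idea as described in the abstract only. The decaying threshold schedule tau0 * decay**k, the linearly increasing batch size b0 * (k + 1), and all names (csgd, grad_fn, last_sent, etc.) are assumptions for illustration; the paper specifies the actual rule and schedules.

    import numpy as np

    def csgd(grad_fn, x0, num_workers, iters, lr=0.1, tau0=1.0, decay=0.9, b0=4):
        """Sketch of communication-censored SGD.
        grad_fn(w, x, batch) -> minibatch stochastic gradient of worker w at x.
        Threshold and batch-size schedules are assumed, not the paper's."""
        x = x0.copy()
        # Server-side copies of the last gradient each worker transmitted.
        last_sent = [np.zeros_like(x0) for _ in range(num_workers)]
        comms = 0
        for k in range(iters):
            tau = tau0 * decay**k   # assumed decaying censoring threshold
            batch = b0 * (k + 1)    # assumed increasing batch size to damp gradient noise
            for w in range(num_workers):
                g = grad_fn(w, x, batch)
                # Censoring: transmit only if the new gradient differs enough
                # from the copy the server already holds for this worker.
                if np.linalg.norm(g - last_sent[w]) >= tau:
                    last_sent[w] = g   # "transmission": server copy updated
                    comms += 1
                # Otherwise the server silently reuses the stale last_sent[w].
            # Server step: average over latest (possibly stale) gradients.
            x -= lr * np.mean(last_sent, axis=0)
        return x, comms

The design intuition, per the abstract: as the batch size grows, stochastic gradient noise shrinks, so an unchanged gradient norm increasingly reflects genuine stagnation rather than noise, and skipped transmissions become safe; the counter comms makes the communication savings directly measurable against plain SGD.
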
Keywords
Communication censoring, communication efficiency, distributed optimization, stochastic gradient descent (SGD)