Distributed Variance Reduction with Optimal Communication

arXiv e-prints (2020)

Abstract
We consider the problem of distributed variance reduction: $n$ machines each receive probabilistic estimates of an unknown true vector $\Delta$, and must cooperate to find a common estimate of $\Delta$ with lower variance, while minimizing communication. Variance reduction is closely related to the well-studied problem of distributed mean estimation, and is a key procedure in instances of distributed optimization, such as data-parallel stochastic gradient descent. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an output variance bound in terms of this norm. However, in real applications, the input vectors can be concentrated around the true vector $\Delta$, but $\Delta$ itself may have large norm. In this case, output variance bounds in terms of input norm perform poorly, and may even increase variance. In this paper, we show that output variance need not depend on input norm. We provide a method of quantization which allows variance reduction to be performed with solution quality dependent only on input variance, not on input norm, and show an analogous result for mean estimation. This method is effective over a wide range of communication regimes, from sublinear to superlinear in the dimension. We also provide lower bounds showing that in many cases the communication to output variance trade-off is asymptotically optimal. Further, we show experimentally that our method yields improvements for common optimization tasks, when compared to prior approaches to distributed mean estimation.
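To make the abstract's central claim concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm) of why quantizing inputs relative to a shared reference vector can make the estimation error depend on the spread of the inputs around $\Delta$ rather than on the norm of $\Delta$ itself. All function names and parameters below are illustrative assumptions.

```python
# Illustrative sketch only: n workers hold noisy estimates of an unknown vector
# Delta. Naively quantizing each raw input yields error proportional to the
# input's value range (hence its norm); quantizing the *difference* from a
# shared reference vector yields error proportional to the inputs' spread.
import numpy as np

def stochastic_quantize(v, levels=16):
    """Unbiased stochastic quantization of v onto `levels` uniform levels
    spanning [min(v), max(v)]; returns the dequantized vector."""
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    scale = (hi - lo) / (levels - 1)
    x = (v - lo) / scale                     # position in level units
    floor = np.floor(x)
    prob_up = x - floor                      # round up with this probability
    q = floor + (np.random.rand(*v.shape) < prob_up)
    return lo + q * scale

def naive_mean_estimate(inputs, levels=16):
    # Each worker quantizes its raw input; error scales with the input norm.
    return np.mean([stochastic_quantize(x, levels) for x in inputs], axis=0)

def reference_based_mean_estimate(inputs, levels=16):
    # Workers quantize the difference from a shared reference (here, the first
    # worker's vector); error scales with the spread around that reference.
    ref = inputs[0]
    diffs = [stochastic_quantize(x - ref, levels) for x in inputs]
    return ref + np.mean(diffs, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 1000, 32
    delta = 100.0 * rng.standard_normal(d)   # true vector with large norm
    inputs = [delta + rng.standard_normal(d) for _ in range(n)]  # small variance
    exact = np.mean(inputs, axis=0)
    for name, est in [("naive", naive_mean_estimate(inputs)),
                      ("reference-based", reference_based_mean_estimate(inputs))]:
        print(name, "MSE vs. exact mean:", np.mean((est - exact) ** 2))
```

Under these assumptions, the naive estimate's quantization error grows with the magnitude of $\Delta$, while the reference-based estimate's error is governed by the inputs' variance; the paper's contribution is a quantization scheme achieving this kind of norm-independent guarantee across communication regimes, together with matching lower bounds.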
Keywords
variance reduction, communication