Communication Efficient Distributed Optimization using an Approximate Newton-type Method

ICML'14: Proceedings of the 31st International Conference on Machine Learning - Volume 32 (2014)

Abstract
We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
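
The abstract does not spell out the update rule. As a minimal sketch of an approximate Newton-type iteration of this flavor on quadratic objectives phi_i(w) = ||A_i w - b_i||^2 / (2 n_i): each machine solves a locally regularized Newton system against the averaged global gradient, and the driver averages the resulting steps. The simulated data, the number of machines, and the parameters mu (regularizer) and eta (step size) below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: approximate Newton-type distributed step for quadratic
# objectives, two communication rounds per iteration (gradient averaging,
# then step averaging). All constants here are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, n_per, m = 20, 500, 8            # dimension, samples per machine, machines
w_star = rng.normal(size=d)

# Simulated local datasets drawn from a common distribution.
A = [rng.normal(size=(n_per, d)) for _ in range(m)]
b = [Ai @ w_star + 0.1 * rng.normal(size=n_per) for Ai in A]
H = [Ai.T @ Ai / n_per for Ai in A]  # local Hessians of phi_i

def global_grad(w):
    # First communication round: average the local gradients H_i w - A_i^T b_i / n.
    return np.mean(
        [Hi @ w - Ai.T @ bi / n_per for Ai, bi, Hi in zip(A, b, H)], axis=0
    )

mu, eta = 1.0, 1.0                  # assumed regularizer and step size
w = np.zeros(d)
for t in range(10):
    g = global_grad(w)
    # Each machine preconditions the global gradient with its own
    # regularized Hessian: solve (H_i + mu I) s_i = g locally.
    steps = [np.linalg.solve(Hi + mu * np.eye(d), g) for Hi in H]
    # Second communication round: average the local steps and update.
    w = w - eta * np.mean(steps, axis=0)
    print(f"iter {t}: grad norm = {np.linalg.norm(global_grad(w)):.3e}")
```

Because every machine's Hessian is built from samples of the same distribution, the averaged inverse (H_i + mu I)^{-1} approximates the true inverse Hessian better as the per-machine data size grows, which is consistent with the abstract's claim that the convergence rate improves with data size.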