Primal-Dual Stochastic Distributed Algorithm for Constrained Convex Optimization

Journal of the Franklin Institute (2019)

Abstract
This paper investigates distributed convex optimization problems over an undirected and connected network, where each node's variable lies in a private constrained convex set and all nodes aim to collectively minimize the sum of the local objective functions. Motivated by machine learning applications in which a large-scale training set is distributed across multiple autonomous nodes, each local objective function is further designed as the average of a moderate number of local instantaneous functions. Neither the local objective functions nor the constraint sets can be shared with other nodes. A primal-dual stochastic algorithm is presented to address these distributed convex optimization problems, in which each node updates its state using unbiased stochastic averaging gradients and projects onto its private constraint set. At each iteration, each node evaluates the gradient of one randomly selected local instantaneous function and approximates the true local gradient by the average of the most recent stochastic gradients. In the constrained case, we show that under strong convexity of the local instantaneous functions and Lipschitz continuity of their gradients, the algorithm converges almost surely to the global optimal solution. In the unconstrained case, an explicit linear convergence rate of the algorithm is provided. Numerical experiments demonstrate the correctness of the theoretical results.
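The abstract does not give the paper's exact update rules, so the following is only a minimal illustrative sketch of the class of method it describes: SAGA-style unbiased stochastic averaging gradient estimates combined with a Laplacian-based primal-dual consensus update and a projection onto each node's private constraint set. The quadratic instantaneous functions, ring topology, box constraint, and step size below are all assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch of a distributed primal-dual scheme with SAGA-style
# stochastic averaging gradients. All problem data, the consensus coupling,
# and the step size are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, m_funcs, dim = 4, 10, 3

# Local instantaneous functions: f_{i,j}(x) = 0.5 * (a_{ij}^T x - b_{ij})^2
A = rng.normal(size=(n_nodes, m_funcs, dim))
b = rng.normal(size=(n_nodes, m_funcs))

def grad(i, j, x):
    """Gradient of the (i, j)-th local instantaneous function at x."""
    return (A[i, j] @ x - b[i, j]) * A[i, j]

def project(x):
    """Projection onto the private constraint set (here: the box [-1, 1]^d)."""
    return np.clip(x, -1.0, 1.0)

# Ring-graph Laplacian (undirected, connected) for the consensus coupling.
L = 2.0 * np.eye(n_nodes)
for i in range(n_nodes):
    L[i, (i + 1) % n_nodes] = L[i, (i - 1) % n_nodes] = -1.0

x = np.zeros((n_nodes, dim))    # primal states, one row per node
lam = np.zeros((n_nodes, dim))  # dual states for the consensus constraint
table = np.stack([[grad(i, j, x[i]) for j in range(m_funcs)]
                  for i in range(n_nodes)])  # stored past gradients (SAGA)
alpha = 0.05                    # step size (assumed)

for k in range(2000):
    g = np.empty_like(x)
    for i in range(n_nodes):
        j = rng.integers(m_funcs)  # sample one instantaneous function
        new = grad(i, j, x[i])
        # Unbiased stochastic averaging gradient estimate (SAGA-style).
        g[i] = new - table[i, j] + table[i].mean(axis=0)
        table[i, j] = new
    # Primal step: stochastic gradient + consensus and dual terms, then project.
    x_new = np.stack([project(x[i] - alpha * (g[i] + (L @ x)[i] + (L @ lam)[i]))
                      for i in range(n_nodes)])
    lam += alpha * (L @ x_new)  # dual ascent on the consensus constraint
    x = x_new

print("disagreement across nodes:", np.linalg.norm(x - x.mean(axis=0)))
```

Replacing the stored-gradient table with a plain sampled gradient would recover an ordinary distributed stochastic gradient scheme; the averaging table is what drives the variance of the estimate to zero and, in the unconstrained case, underpins the linear convergence rate claimed in the abstract.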
Keywords
Constrained convex optimization, Machine learning, Primal-dual algorithm, Stochastic averaging gradients, Linear convergence