Optimal Distributed Optimization on Slowly Time-Varying Graphs
arXiv: Optimization and Control (2018)
Abstract
We study the convergence rate of first-order optimization algorithms when the objective function is allowed to change from one iteration to the next, but its minimizer and optimal value remain the same. This problem is motivated by recent developments in optimal distributed optimization algorithms over networks, where computational nodes or agents can experience network malfunctions such as a loss of connection between two nodes. We show explicit and non-asymptotic linear convergence of the distributed versions of gradient descent and Nesterov's fast gradient method on strongly convex and smooth objective functions when the network of nodes undergoes a finite number of changes (we call such a network slowly time-varying). Moreover, we show that Nesterov's method reaches the optimal iteration complexity of $\Omega\left(\sqrt{\kappa \cdot \chi(W)}\,\log\frac{1}{\varepsilon}\right)$ for decentralized algorithms, where $\kappa$ and $\chi(W)$ are the condition numbers of the objective function and the communication graph, respectively.
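To make the setting concrete, the following is a minimal sketch of decentralized gradient descent over a slowly time-varying graph: each node mixes its iterate with its neighbors through a doubly stochastic gossip matrix and takes a local gradient step, and the topology switches once mid-run (a finite number of changes). The quadratic local objectives, the Metropolis weights, the step size `eta`, and the helper `gossip_matrix` are all illustrative assumptions, not the paper's construction; in particular, plain decentralized gradient descent with a constant step size only reaches a neighborhood of the optimum, whereas the paper's accelerated method attains exact linear convergence at the optimal rate.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): each node i holds a
# local strongly convex quadratic f_i(x) = 0.5 * a_i * (x - b_i)^2, and the
# global objective is the sum over nodes. Nodes communicate via a gossip
# matrix W_t that changes a finite number of times ("slowly time-varying").

rng = np.random.default_rng(0)
n = 5                            # number of nodes (assumed)
a = rng.uniform(1.0, 4.0, n)     # local curvatures (strong convexity params)
b = rng.uniform(-1.0, 1.0, n)    # local minimizers

def grad(x):
    """Stacked local gradients: node i only evaluates f_i'(x_i) = a_i*(x_i - b_i)."""
    return a * (x - b)

def gossip_matrix(edges):
    """Symmetric, doubly stochastic Metropolis weights for an undirected graph."""
    W = np.zeros((n, n))
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    for i, j in edges:
        w = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, j] = W[j, i] = w
    W += np.diag(1.0 - W.sum(axis=1))  # self-weights make rows sum to 1
    return W

# Two topologies; the network switches once midway (finite number of changes).
ring = [(i, (i + 1) % n) for i in range(n)]
path = [(i, i + 1) for i in range(n - 1)]
W_by_phase = [gossip_matrix(ring), gossip_matrix(path)]

x = np.zeros(n)                  # one scalar iterate per node
eta = 0.2                        # step size (assumed; needs eta < 2/max_i a_i)
x_star = (a @ b) / a.sum()       # exact minimizer of sum_i f_i

for t in range(200):
    W = W_by_phase[0] if t < 100 else W_by_phase[1]
    x = W @ x - eta * grad(x)    # consensus averaging + local gradient step

# Constant-step DGD stalls in an O(eta)-neighborhood of x_star; exact linear
# convergence requires the corrected/accelerated schemes the paper analyzes.
print("max distance to optimum:", np.abs(x - x_star).max())
```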