Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis

IEEE TRANSACTIONS ON SIGNAL PROCESSING (2021)

Cited by 63 | Views 19
Abstract
We study distributed composite optimization over networks: agents minimize a sum of smooth (strongly) convex functions (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general unified algorithmic framework for this class of problems and provide a convergence analysis leveraging the theory of operator splitting. Distinguishing features of our scheme are: (i) when each agent's function is strongly convex, the algorithm converges at a linear rate whose dependence on the agents' functions and on the network topology is decoupled; (ii) when the objective function is convex (but not strongly convex), a similar decoupling is established for the coefficient of the proved sublinear rate, which also reveals the role of function heterogeneity in the convergence rate; (iii) the algorithm can adjust the ratio between the number of communications and computations to achieve a rate (in terms of computations) independent of the network connectivity; and (iv) a by-product of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms, yielding provably faster (worst-case) convergence rates for the class of problems under consideration.
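For concreteness, the problem class described in the abstract can be written as below; the notation (N agents, local smooth costs f_i, shared nonsmooth term G) is ours and is not taken from the paper.

```latex
\min_{x \in \mathbb{R}^d} \quad \sum_{i=1}^{N} f_i(x) \; + \; G(x),
```

where each f_i is the smooth (strongly) convex cost known only to agent i, and G is a common nonsmooth, extended-valued convex term (e.g., a regularizer or the indicator function of a constraint set).

The sketch below illustrates a generic decentralized proximal-gradient iteration with gradient tracking for this problem class. It is not the paper's unified scheme; the mixing matrix W, the step size alpha, and the l1 choice of G are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (the illustrative nonsmooth term G)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def decentralized_prox_gradient(grad_fs, W, x0, alpha, lam, num_iters=500):
    """Generic decentralized proximal-gradient sketch with gradient tracking.

    grad_fs : list of callables; grad_fs[i](x) is the gradient of agent i's
              smooth local cost f_i at the length-d vector x
    W       : (N, N) doubly stochastic mixing matrix compatible with the graph
    x0      : (N, d) array of initial local copies of the decision variable
    alpha   : step size
    lam     : weight of the illustrative nonsmooth term G(x) = lam * ||x||_1
    """
    N, _ = x0.shape
    x = x0.copy()
    g = np.stack([grad_fs[i](x[i]) for i in range(N)])  # current local gradients
    y = g.copy()                                        # gradient-tracking variable
    for _ in range(num_iters):
        # Communication: mix local copies and tracked gradients with neighbors;
        # Computation: local proximal-gradient step along the tracked direction.
        x_new = soft_threshold(W @ x - alpha * y, alpha * lam)
        g_new = np.stack([grad_fs[i](x_new[i]) for i in range(N)])
        y = W @ y + g_new - g
        x, g = x_new, g_new
    return x
```

In schemes of this kind, running several mixing (communication) rounds per proximal-gradient (computation) step is one way to trade communications for computations, as referred to in (iii); in the sketch above that would amount to replacing W with a power W^K.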
Keywords
Convergence, Signal processing algorithms, Distributed algorithms, Tuning, Network topology, Electronic mail, Sun, Distributed optimization, linear convergence