Sublinear and Linear Convergence of Modified ADMM for Distributed Nonconvex Optimization.

IEEE Trans. Control Netw. Syst. (2023)

Abstract
In this article, we consider distributed nonconvex optimization over an undirected connected network. Each agent can only access its own local nonconvex cost function, and all agents collaborate to minimize the sum of these functions using local information exchange. We first propose a modified alternating direction method of multipliers (ADMM) algorithm. We show that the proposed algorithm converges to a stationary point at the sublinear rate $\mathcal{O}(1/T)$ if each local cost function is smooth and the algorithm parameters are chosen appropriately. We also show that the proposed algorithm converges linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz condition, which is weaker than the conditions commonly used to establish linear convergence rates, including strong convexity. We then propose a distributed linearized ADMM (L-ADMM) algorithm, derived from the modified ADMM algorithm by linearizing the local cost function at each iteration. We show that the L-ADMM algorithm has the same convergence properties as the modified ADMM algorithm under the same conditions. Numerical simulations are included to verify the correctness and efficiency of the proposed algorithms.
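To make the linearization idea concrete, the following is a minimal sketch of a generic linearized global-consensus ADMM, not the paper's exact modified updates: each agent replaces its local cost $f_i$ with a first-order model around the current iterate, which makes the primal step closed-form. The function names, the penalty parameter `rho`, and the proximal weight `beta` are illustrative assumptions.

```python
import numpy as np

def linearized_consensus_admm(grads, x0, rho=1.0, beta=2.0, iters=300):
    """Sketch of a linearized consensus ADMM (illustrative, not the
    paper's algorithm): minimize sum_i f_i(x) s.t. x_i = z, using only
    each agent's gradient oracle grads[i]."""
    n = len(grads)
    x = np.array(x0, dtype=float)  # local primal variables x_i
    u = np.zeros(n)                # scaled dual variables u_i
    z = x.mean()                   # consensus variable
    for _ in range(iters):
        # Linearized primal step: f_i is replaced by its first-order
        # model at x_i plus a (beta/2)||x - x_i||^2 proximal term,
        # giving a closed-form update.
        g = np.array([grads[i](x[i]) for i in range(n)])
        x = (beta * x + rho * (z - u) - g) / (beta + rho)
        # Consensus step: average of x_i + u_i.
        z = (x + u).mean()
        # Dual ascent step.
        u = u + x - z
    return z

# Toy example with f_i(x) = (x - a_i)^2, whose sum satisfies the
# Polyak-Lojasiewicz condition; the minimizer of the sum is mean(a).
a = np.array([1.0, 3.0, 5.0, 7.0])
grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
z_star = linearized_consensus_admm(grads, x0=np.zeros(4))
```

For these quadratic costs the consensus iterate contracts geometrically toward `mean(a) = 4.0`, which mirrors the linear-rate behavior the paper establishes under the Polyak–Łojasiewicz condition.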
Keywords
Alternating direction method of multipliers (ADMM), distributed optimization, linear convergence, linearized ADMM, Polyak–Łojasiewicz condition