Linear Convergence of First- and Zeroth-Order Primal–Dual Algorithms for Distributed Nonconvex Optimization

IEEE Transactions on Automatic Control (2022)

Abstract
This article considers the distributed nonconvex optimization problem of minimizing a global cost function, formed as a sum of local cost functions, using only local information exchange. We first consider a distributed first-order primal–dual algorithm. We show that it converges sublinearly to a stationary point if each local cost function is smooth, and linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz (P–Ł) condition. This condition is weaker than strong convexity, the standard assumption for proving linear convergence of distributed optimization algorithms, and it does not require the global minimizer to be unique. Motivated by situations where gradients are unavailable, we then propose a distributed zeroth-order algorithm, derived from the first-order algorithm by using a deterministic gradient estimator, and show that it has the same convergence properties as the first-order algorithm under the same conditions. The theoretical results are illustrated by numerical simulations.
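
The abstract does not display the algorithm itself; below is one common template for distributed primal–dual updates of this type, given purely for orientation. All symbols here (the network Laplacian L, step size η, gains α and β) are introduced for illustration and are not necessarily the paper's exact update.

```latex
% Illustrative distributed primal-dual template (not necessarily the
% paper's exact update). x_k stacks the agents' local copies of the
% decision variable, v_k stacks the dual variables, and \nabla\tilde{f}
% stacks the local gradients \nabla f_i evaluated at the local copies.
x_{k+1} = x_k - \eta \bigl( \nabla \tilde{f}(x_k)
          + \alpha (L \otimes I_p)\, x_k + \beta v_k \bigr), \qquad
v_{k+1} = v_k + \eta \beta (L \otimes I_p)\, x_k.
```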
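For reference, the Polyak–Łojasiewicz condition mentioned above has the following standard statement; the constant μ and the notation f* are generic and not taken from the paper.

```latex
% A smooth function f satisfies the P-L condition with mu > 0 if, for
% all x,
%   (1/2) ||grad f(x)||^2 >= mu (f(x) - f*),   f* = min_x f(x).
% Strong convexity implies P-L, but P-L functions need not be convex
% and may have multiple global minimizers.
\frac{1}{2}\,\lVert \nabla f(x) \rVert^2 \;\ge\; \mu \bigl( f(x) - f^\ast \bigr),
\qquad \forall x, \quad f^\ast = \min_x f(x).
```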
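The abstract also does not specify the deterministic gradient estimator. A common deterministic choice in zeroth-order optimization is the coordinate-wise (2d-point) central-difference estimator sketched below; the paper's exact construction may differ, and the function name and smoothing parameter delta here are illustrative.

```python
import numpy as np

def deterministic_grad_estimate(f, x, delta=1e-5):
    """Coordinate-wise central-difference gradient estimator.

    A standard deterministic zeroth-order estimator using 2d function
    evaluations for x in R^d (illustrative; the paper's estimator may
    differ).
    """
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0  # i-th standard basis vector
        g[i] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g

# Usage: estimate the gradient of a smooth toy cost at a point.
f = lambda x: np.sum(x**2) + np.sin(x[0])   # smooth, nonconvex in x[0]
x = np.array([0.5, -1.0])
print(deterministic_grad_estimate(f, x))     # ~ [2*0.5 + cos(0.5), -2.0]
```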
Keywords
Distributed nonconvex optimization, first-order algorithm, linear convergence, primal–dual algorithm, zeroth-order algorithm