Distributed Optimization With Personalization: A Flexible Algorithmic Framework

IEEE Transactions on Automatic Control (2024)

Abstract
We consider a distributed personalized optimization problem over a network of nodes whose cost functions depend on a decision variable consisting of two parts: a global (shared) part and a local (node-specific) part. This problem structure arises in several important scenarios where the global part captures the common feature among nodes and the local part represents the personalized feature. To solve this problem, we develop a flexible algorithmic framework employing a personalized distributed stochastic gradient tracking method, where each node updates its variables locally and communicates the shared part with its neighbors for coordination in a stochastic way. Different from existing works, the proposed framework provides great flexibility in designing algorithms with uncoordinated stepsizes for updating the local and global parts over changing topologies. Most importantly, it can eliminate the effect of heterogeneity among nodes thanks to the gradient tracking scheme. Leveraging a properly designed Lyapunov function, we prove the convergence of the proposed algorithm for both (strongly) convex and non-convex objective functions with cross-block Lipschitz gradients. The obtained rate results show a clear dependence of the convergence performance on the topology, the heterogeneity of the stepsizes, and the properties of the objective functions. Numerical examples are provided to demonstrate the effectiveness of the proposed algorithm in accounting for node-specific heterogeneity.
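To make the structure described above concrete, the following is a minimal sketch of a personalized gradient tracking update of the kind the abstract outlines, not the paper's exact algorithm: each node holds a copy of the shared part `g` and a private part `l`, the shared copies are mixed over a ring topology via a doubly stochastic matrix `W`, and a tracker `y` estimates the network-average gradient of the shared block. The quadratic costs, the ring topology, and the stepsizes `alpha`/`beta` (deliberately uncoordinated between the global and local updates) are illustrative assumptions; the sketch also uses full gradients rather than stochastic ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dg, dl, m = 4, 3, 2, 5  # nodes, global dim, local dim, samples per node

# Illustrative ring-topology mixing matrix (doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

# Assumed heterogeneous costs: f_i(g, l_i) = 0.5 * ||A_i g + B_i l_i - b_i||^2
A = rng.standard_normal((n, m, dg))
B = rng.standard_normal((n, m, dl))
b = rng.standard_normal((n, m))

def grad_g(i, g, l):  # gradient w.r.t. the shared block at node i
    return A[i].T @ (A[i] @ g + B[i] @ l - b[i])

def grad_l(i, g, l):  # gradient w.r.t. the personalized block at node i
    return B[i].T @ (A[i] @ g + B[i] @ l - b[i])

g = np.zeros((n, dg))  # per-node copies of the shared part
l = np.zeros((n, dl))  # node-specific parts (never communicated)
y = np.stack([grad_g(i, g[i], l[i]) for i in range(n)])  # gradient trackers

alpha, beta = 0.01, 0.02  # uncoordinated stepsizes for global / local updates
for _ in range(5000):
    g_old, l_old = g.copy(), l.copy()
    g = W @ g - alpha * y  # communicate shared part, descend along tracked gradient
    for i in range(n):
        l[i] -= beta * grad_l(i, g[i], l[i])  # purely local personalization step
    # Tracking recursion: y <- W y + (new gradient) - (old gradient),
    # so y keeps estimating the network-average gradient of the shared block.
    y = W @ y + np.stack([grad_g(i, g[i], l[i]) - grad_g(i, g_old[i], l_old[i])
                          for i in range(n)])

# The copies of the shared part should approach consensus across nodes.
print(np.max(np.abs(g - g.mean(axis=0))))
```

The tracking recursion is what removes the effect of gradient heterogeneity across nodes: with plain distributed gradient descent the shared copies would only reach a neighborhood of a stationary point, whereas the tracker corrects for the difference between each node's gradient and the network average.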