Decentralized multi-task stochastic optimization with compressed communications

Automatica (2024)

Abstract
We consider a multi-agent network where each node has a stochastic (local) cost function that depends on the decision variable of that node and a random variable, and, further, the decision variables of neighboring nodes are pairwise constrained. There is an aggregate objective function for the network, composed additively of the expected values of the local cost functions at the nodes, and the overall goal of the network is to obtain the minimizing solution to this aggregate objective function subject to all the pairwise constraints. This is to be achieved at the level of the nodes using decentralized information and local computation, with only exchanges of compressed information allowed between neighboring nodes. The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost, and (ii) bandit feedback, where samples of the random variables are not available, but only the values of the local cost functions at two random points close to the decision are available to each node. For both models, with compressed communication between neighbors, we develop decentralized saddle-point algorithms whose performance is no different (in an order sense) from that without communication compression; specifically, we show that the deviation from the global minimum value and the violations of the constraints are upper-bounded by O(T^{-1/2}) and O(T^{-1/4}), respectively, where T is the number of iterations. Numerical examples provided in the paper corroborate these bounds. © 2023 Elsevier Ltd. All rights reserved.
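The two ingredients named in the abstract, bandit feedback (querying the local cost at two random points close to the decision) and communication compression, can be sketched in isolation. The sketch below is illustrative only: the Gaussian-direction two-point estimator and the top-k compressor are common choices assumed here, not the paper's exact constructions.

```python
import numpy as np

def two_point_gradient(f, x, delta=1e-2, rng=None):
    """Two-point bandit-feedback gradient estimate of f at x.

    Only the values of f at two random points close to x are used,
    matching the bandit-feedback model; no samples of the underlying
    random variable are needed. (Illustrative estimator, assumed here.)
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random unit direction
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest.

    A standard sparsifying compressor of the kind a node might apply
    before sending a message to a neighbor. (Assumed choice.)
    """
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out
```

In a decentralized saddle-point iteration of the kind the abstract describes, each node would form such a gradient estimate of its local cost (plus dual terms for the pairwise constraints) and exchange only the compressed update with its neighbors.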
Keywords
Decentralized stochastic optimization, Saddle-point algorithm, Compression, Sample feedback, Bandit feedback