Hybrid Distributed Optimization for Learning Over Networks With Heterogeneous Agents

IEEE Access (2023)

Abstract
This paper considers distributed optimization for learning problems over networks with heterogeneous agents that have different computational capabilities. This heterogeneity implies that a subset of the agents may run computationally intensive learning algorithms, such as Newton's method or full gradient descent, while the remaining agents can only run lower-complexity algorithms, such as stochastic gradient descent. This creates opportunities for designing hybrid distributed optimization algorithms that rely on cooperation among the network agents to enhance overall performance, improve the rate of convergence, and reduce the communication overhead. We show in this work that hybrid learning with cooperation among heterogeneous agents attains a stable solution. For small step-sizes μ, the proposed approach leads to a small estimation error on the order of O(μ). We also provide a theoretical analysis of the stability of the first-, second-, and fourth-order error moments for learning over networks with heterogeneous agents. Finally, results for case-study scenarios are presented and analyzed to demonstrate the effectiveness of the proposed approach.
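The abstract describes cooperation among agents that run local updates of different complexity, and the keywords name the diffusion strategy. The following Python snippet is a minimal sketch of that general idea under illustrative assumptions: the ring network, the combination matrix A, the least-squares data model, and the step-size mu are all hypothetical choices for the example, not details taken from the paper. It implements an adapt-then-combine (ATC) diffusion step in which one agent computes full gradients while the others use single-sample stochastic gradients.

```python
# Sketch of a hybrid adapt-then-combine (ATC) diffusion iteration on a
# least-squares problem. Illustration only: the network topology, data
# model, and step-size below are hypothetical, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

N, d, mu = 4, 5, 0.01              # agents, parameter dimension, step-size
w_true = rng.standard_normal(d)    # common model all agents try to learn

# Doubly-stochastic combination matrix for a ring network.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25

# Per-agent data: rows of X[k] ~ N(0, I), y_k = x^T w_true + noise.
X = [rng.standard_normal((200, d)) for _ in range(N)]
y = [Xk @ w_true + 0.1 * rng.standard_normal(200) for Xk in X]

W = np.zeros((N, d))               # current iterates, one row per agent

def full_gradient(k, w):
    # Exact mean-squared-error gradient at agent k (the "powerful" agent).
    return X[k].T @ (X[k] @ w - y[k]) / len(y[k])

def stochastic_gradient(k, w):
    # Single-sample gradient estimate (a lower-complexity agent).
    i = rng.integers(len(y[k]))
    return X[k][i] * (X[k][i] @ w - y[k][i])

for it in range(2000):
    # Adapt: agent 0 runs full gradient descent, the rest run SGD.
    psi = np.array([
        W[k] - mu * (full_gradient(k, W[k]) if k == 0
                     else stochastic_gradient(k, W[k]))
        for k in range(N)
    ])
    # Combine: each agent averages its neighbors' intermediate estimates.
    W = A @ psi

# With a small mu, the residual error settles at an O(mu)-level floor.
print("mean estimation error:", np.linalg.norm(W - w_true, axis=1).mean())
```

In this sketch the combination step lets the lower-complexity agents inherit the benefit of their more capable neighbor's accurate gradients, which is the kind of cooperation gain the abstract refers to.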
Keywords
Distributed optimization, diffusion strategy, gradient descent, heterogeneous networks, Newton's method, stochastic gradient descent, stability analysis