Hierarchically Federated Learning in Wireless Networks: D2D Consensus and Inter-Cell Aggregation

IEEE Transactions on Machine Learning in Communications and Networking (2024)

Abstract
The decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server. However, DFL is difficult to apply in a multi-cell scenario due to inadequate model averaging and the infeasibility of cross-cell device-to-device (D2D) communications. In this paper, we propose a hierarchically decentralized federated learning (HDFL) framework that combines intra-cell D2D links between devices with backhaul communications between base stations. In HDFL, devices from different cells collaboratively train a global model through periodic intra-cell D2D consensus and inter-cell aggregation. A strong convergence guarantee for the proposed HDFL algorithm is established even for non-convex objectives. Based on the convergence analysis, we characterize the impact of each cell's network topology and of the communication intervals for intra-cell consensus and inter-cell aggregation on the training performance. To further improve the performance of HDFL, we optimize the computation capacity selection and bandwidth allocation to minimize the training latency and energy overhead. Numerical results on the MNIST and CIFAR-10 datasets validate the superiority of HDFL over traditional DFL methods in the multi-cell scenario.
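The abstract does not spell out the HDFL update rule, but the structure it describes (local training, periodic intra-cell D2D consensus via gossip averaging, and less frequent inter-cell aggregation over the backhaul) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the ring mixing matrix, the toy quadratic objective, and the names tau1, tau2, W are all assumptions introduced here.

import numpy as np

rng = np.random.default_rng(0)

num_cells = 3
devices_per_cell = 4
dim = 10     # model dimension (illustrative)
tau1 = 5     # local SGD steps between intra-cell consensus rounds (assumed)
tau2 = 2     # consensus rounds between inter-cell aggregations (assumed)
lr = 0.05

# Per-device models: models[c][i] holds device i's parameters in cell c.
models = [[rng.normal(size=dim) for _ in range(devices_per_cell)]
          for _ in range(num_cells)]

# Doubly stochastic mixing matrix for an assumed ring topology within each cell.
W = np.zeros((devices_per_cell, devices_per_cell))
for i in range(devices_per_cell):
    W[i, i] = 0.5
    W[i, (i + 1) % devices_per_cell] = 0.25
    W[i, (i - 1) % devices_per_cell] = 0.25

def local_gradient(x):
    """Toy quadratic loss ||x - 1||^2 / 2 standing in for a real local objective."""
    return x - np.ones_like(x)

for agg_round in range(10):               # inter-cell aggregation periods
    for consensus_round in range(tau2):   # intra-cell consensus periods
        # Local SGD on every device.
        for c in range(num_cells):
            for i in range(devices_per_cell):
                for _ in range(tau1):
                    models[c][i] -= lr * local_gradient(models[c][i])
        # Intra-cell D2D consensus: one gossip step with mixing matrix W.
        for c in range(num_cells):
            mixed = W @ np.stack(models[c])
            models[c] = [mixed[i] for i in range(devices_per_cell)]
    # Inter-cell aggregation over the backhaul: average cell models globally.
    cell_means = [np.mean(np.stack(models[c]), axis=0) for c in range(num_cells)]
    global_model = np.mean(np.stack(cell_means), axis=0)
    models = [[global_model.copy() for _ in range(devices_per_cell)]
              for _ in range(num_cells)]

print("final model (approaches all-ones on the toy loss):", np.round(global_model, 3))

Under this reading, tau1 and tau2 control the communication intervals whose effect on training performance the paper characterizes, and the spectral properties of W capture the role of each cell's network topology.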
Keywords
decentralized learning,federated learning,multi-cell scenario