Accelerating Federated Learning Via Parameter Selection and Pre-Synchronization in Mobile Edge-Cloud Networks

IEEE Transactions on Mobile Computing (2024)

Abstract
Federated learning (FL) is a distributed machine learning methodology that enables collaborative model training among clients without collecting their private training data. Despite its great benefits for privacy protection, FL still faces challenges such as the limited computation capabilities of clients (e.g., end devices) and significant communication overhead when applied to mobile edge-cloud networks. To address these issues, this paper proposes a novel three-layer FL framework with Parameter Selection and Pre-synchronization (PSPFL) to achieve fast and accurate model training in mobile edge-cloud networks. The basic idea of PSPFL is that clients select a subset of their model parameters for transmission. Base stations then aggregate these parameters cooperatively (i.e., pre-synchronization) and periodically send the aggregated results to the server for global model updates. However, there is an intrinsic trade-off between parameter transmission overhead and model training loss. To strike a desirable balance between them, we investigate the optimal numbers of parameter pre-synchronization rounds and local training rounds under PSPFL. Specifically, we propose an Alternating Minimization (AM) algorithm to obtain the initial local training and parameter pre-synchronization rounds. Moreover, we integrate a Deep Q-Network with AM (DQNAM) to explore and update the optimal solution. Finally, extensive simulations on commonly used datasets evaluate the performance of the proposed method. The results show that it reduces the sum of FL completion time and training loss by 20.72% to 69.25% on average compared to benchmark methods.
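To make the three-layer flow concrete, below is a minimal sketch of the pipeline the abstract describes: clients transmit a selected subset of parameters, base stations pre-synchronize them, and the server updates the global model. The top-k-by-magnitude selection rule and all function names (select_parameters, pre_synchronize, global_update) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def select_parameters(delta, fraction=0.1):
    # Keep only the largest-magnitude fraction of a client's update.
    # Top-k by magnitude is an assumed selection rule; the abstract
    # does not specify the selection criterion.
    flat = delta.ravel()
    k = max(1, int(fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(delta.shape)

def pre_synchronize(client_updates):
    # Edge-level aggregation: a base station averages the sparse
    # updates of its associated clients before forwarding them.
    return np.mean(client_updates, axis=0)

def global_update(edge_aggregates, global_model, lr=1.0):
    # Cloud-level aggregation: the server averages the edge results
    # and applies them to the global model.
    return global_model + lr * np.mean(edge_aggregates, axis=0)

# Toy run: 2 base stations, 3 clients each, a 10-parameter model.
rng = np.random.default_rng(0)
global_model = np.zeros(10)
edge_results = []
for _ in range(2):  # one pre-synchronization per base station
    updates = [select_parameters(rng.normal(size=10)) for _ in range(3)]
    edge_results.append(pre_synchronize(updates))
global_model = global_update(edge_results, global_model)
print(global_model)
```

In this sketch, the fraction parameter controls the transmission-overhead side of the trade-off the paper optimizes: a smaller fraction sends fewer parameters per round but discards more of each local update, which is what the AM/DQNAM tuning of local training and pre-synchronization rounds is meant to balance.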
Keywords
Deep Q-network, federated learning, model parameter pre-synchronization, parameter selection