FedMDS: An Efficient Model Discrepancy-Aware Semi-Asynchronous Clustered Federated Learning Framework

IEEE Transactions on Parallel and Distributed Systems (2023)

Abstract
Federated learning (FL) is an emerging distributed machine learning paradigm that protects privacy and tackles the problem of isolated data islands. There are currently two main communication strategies in FL: synchronous FL and asynchronous FL. Synchronous FL offers high model precision and easy convergence, but this communication strategy is vulnerable to the straggler effect. Asynchronous FL has a natural advantage in mitigating the straggler effect, but it risks model quality degradation and server crashes. In this paper, we propose a model discrepancy-aware semi-asynchronous clustered FL framework, FedMDS, which alleviates the straggler effect by 1) a clustering strategy based on the delay and direction of each model update and 2) a synchronous trigger mechanism that limits model staleness. FedMDS leverages the clustering algorithm to reschedule the clients. Each group of clients performs asynchronous updates until the synchronous update mechanism, based on the model discrepancy, is triggered. We evaluate FedMDS on four typical federated datasets in a non-IID setting and compare it to the baselines. The experimental results show that FedMDS improves average test accuracy by more than $+9.2\%$ across the four datasets compared to TA-FedAvg. In particular, FedMDS improves absolute Top-1 test accuracy by $+37.6\%$ on FEMNIST compared to TA-FedAvg. The average synchronization waiting time of FedMDS is also significantly lower than that of TA-FedAvg on all datasets. Overall, FedMDS improves accuracy while alleviating the straggler effect.
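To make the trigger mechanism concrete, below is a minimal Python sketch of one plausible reading of the semi-asynchronous loop described in the abstract: clusters of clients take asynchronous local steps, and a synchronized aggregation is forced once a model-discrepancy or staleness bound is exceeded. All names (`ClusterState`, `fedmds_round`, `discrepancy_threshold`, `max_staleness`) and the L2 discrepancy measure are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

class ClusterState:
    """One cluster of clients that updates asynchronously (illustrative)."""
    def __init__(self, global_model):
        self.model = global_model.copy()  # cluster-local model copy
        self.staleness = 0                # steps since last synchronization

def model_discrepancy(cluster_model, global_model):
    # L2 distance as a stand-in for the paper's discrepancy measure.
    return np.linalg.norm(cluster_model - global_model)

def fedmds_round(clusters, global_model, client_update,
                 discrepancy_threshold=1.0, max_staleness=5):
    """One scheduling step: apply an asynchronous client update to each
    cluster, then force a synchronous merge if any cluster exceeds the
    discrepancy threshold or the staleness bound."""
    for c in clusters:
        c.model += client_update(c.model)  # asynchronous local step
        c.staleness += 1

    # Synchronous trigger: a cluster that has drifted too far from the
    # global model, or become too stale, forces a synchronized aggregation.
    if any(model_discrepancy(c.model, global_model) > discrepancy_threshold
           or c.staleness > max_staleness for c in clusters):
        global_model = np.mean([c.model for c in clusters], axis=0)
        for c in clusters:
            c.model = global_model.copy()
            c.staleness = 0
    return global_model

if __name__ == "__main__":
    # Toy usage: three clusters, a noisy gradient-like update rule.
    rng = np.random.default_rng(0)
    global_model = np.zeros(10)
    clusters = [ClusterState(global_model) for _ in range(3)]
    step = lambda m: -0.1 * m + 0.01 * rng.standard_normal(m.shape)
    for _ in range(20):
        global_model = fedmds_round(clusters, global_model, step)
```

In this sketch the synchronization point is data-driven rather than fixed per round, which is what lets fast clusters proceed asynchronously while the discrepancy and staleness bounds cap how far any cluster's model can drift from the global one.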
Keywords
Distributed machine learning, federated learning, neural network