Robust Asynchronous Federated Learning with Time-weighted and Stale Model Aggregation

IEEE Transactions on Dependable and Secure Computing (2023)

Abstract
Federated Learning (FL) enables collaborative learning among multiple clients while keeping data local. However, traditional synchronous FL solutions suffer from lower accuracy and longer communication time in scenarios where most devices drop out during learning. We therefore propose an Asynchronous Federated Learning (AsyFL) scheme that uses time-weighted and stale model aggregation, which effectively mitigates the poor model performance caused by device heterogeneity. We then integrate Symmetric Homomorphic Encryption (SHE) into AsyFL to obtain Asynchronous Privacy-Preserving Federated Learning (Asy-PPFL), which protects client privacy and achieves lightweight computation. Privacy analysis shows that Asy-PPFL is indistinguishable under a Known Plaintext Attack (KPA), and convergence analysis proves the effectiveness of our schemes. Extensive experiments show that AsyFL and Asy-PPFL achieve accuracies of up to 58.40% and 58.26%, respectively, on the CIFAR-10 dataset when most clients (i.e., 80%) are offline or delayed.
Keywords
Federated learning, heterogeneity, symmetric homomorphic encryption, privacy, lightweight computing
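
To make the abstract's time-weighted, staleness-aware aggregation concrete, the sketch below shows one plausible form of such a rule in Python. It is not the paper's exact method: the exponential staleness discount alpha ** staleness, the default alpha value, the dict-of-floats model representation, and the function name asy_aggregate are all illustrative assumptions.

    def asy_aggregate(global_model, updates, alpha=0.5):
        """Aggregate asynchronous client updates with a staleness-discounted,
        time-weighted average.

        `updates` is a list of (client_weights, staleness) pairs, where
        staleness = current_round - round_the_client's_model_was_based_on.
        NOTE: illustrative sketch only; AsyFL's actual weighting may differ.
        """
        # Staleness-decayed coefficients: fresher updates count more.
        coeffs = [alpha ** staleness for _, staleness in updates]
        total = sum(coeffs)

        # Normalized weighted sum of client parameters.
        new_model = {name: 0.0 for name in global_model}
        for (client_weights, _), c in zip(updates, coeffs):
            for name in new_model:
                new_model[name] += (c / total) * client_weights[name]
        return new_model

    # Hypothetical usage: one fresh client (staleness 0), one stale (staleness 3).
    global_model = {"w": 0.0}
    updates = [({"w": 1.0}, 0), ({"w": -1.0}, 3)]
    print(asy_aggregate(global_model, updates))  # the fresh client dominates

The design intuition matches the abstract: stale contributions are not discarded (which would waste the work of slow or intermittently offline clients) but are down-weighted so they cannot drag the global model toward an outdated state.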