HDFL: A Heterogeneity and Client Dropout-Aware Federated Learning Framework

2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), 2023

Abstract
Cross-device Federated Learning (FL) enables training machine learning (ML) models on private data that is heterogeneously distributed over many IoT end devices without violating privacy requirements. Clients typically vary significantly in data quality, hardware resources, and stability, which leads to challenges such as increased training times, higher resource costs, sub-par model performance, and biased training. Existing works tend to address each of these challenges in isolation, overlooking how they affect one another holistically. We perform a first-of-its-kind characterization study that empirically demonstrates how these properties interact to impact important performance metrics such as model error, fairness, resource cost, and training time. Based on our observations, we propose HDFL, which is, to our knowledge, the first framework to comprehensively consider the multiple aforementioned challenges of practical FL systems. We implement HDFL on a real distributed system and evaluate it on multiple benchmark datasets, showing that HDFL achieves a better Pareto frontier than both state-of-the-practice and state-of-the-art systems, with up to 4-10% better model accuracy, 33% improved good-intent fairness, 63% lower cost, and 17% faster training time.
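The cross-device setting the abstract describes can be illustrated with a minimal FedAvg-style training loop in which some clients drop out each round and the server averages only the survivors, weighted by local dataset size. This is a generic sketch of the baseline setting, not HDFL's actual selection or aggregation policy (which the abstract does not specify); the toy single-weight model, learning rate, and dropout probability are illustrative assumptions.

```python
import random


def local_update(w, data, lr=0.01):
    # One gradient step on a toy least-squares model y ~ w * x,
    # standing in for a client's local training pass.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad


def fedavg_round(global_w, clients, dropout_prob=0.3, rng=random):
    """One FedAvg round with client dropout.

    Each client either drops out (simulating an unstable device) or
    trains locally from the current global model; the server averages
    the surviving updates, weighted by local dataset size.
    """
    updates, sizes = [], []
    for data in clients:
        if rng.random() < dropout_prob:  # client dropped out this round
            continue
        updates.append(local_update(global_w, data))
        sizes.append(len(data))
    if not updates:  # all clients dropped: keep the old global model
        return global_w
    return sum(w * n for w, n in zip(updates, sizes)) / sum(sizes)


# Toy run: 10 clients with heterogeneous noisy data around true weight 2.0.
rng = random.Random(0)
clients = [[(x, 2.0 * x + rng.gauss(0, 0.1)) for x in range(1, 6)]
           for _ in range(10)]
w = 0.0
for _ in range(60):
    w = fedavg_round(w, clients, dropout_prob=0.3, rng=rng)
print(w)  # converges near the true weight 2.0
```

Because dropped clients contribute nothing to the average, rounds dominated by one subpopulation can bias the model, which is one way the client-stability and fairness issues in the abstract interact.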
Keywords
federated learning,privacy,deep learning,fairness