Mitigating System Bias in Resource Constrained Asynchronous Federated Learning Systems
CoRR (2024)
Abstract
Federated learning (FL) systems face performance challenges in dealing with
heterogeneous devices and non-identically distributed data across clients. We
propose a dynamic global model aggregation method within Asynchronous Federated
Learning (AFL) deployments to address these issues. Our aggregation method
scores and adjusts the weighting of client model updates based on their upload
frequency to accommodate differences in device capabilities. Additionally, we
immediately provide an updated global model to clients after they upload
their local models to reduce idle time and improve training efficiency. We
evaluate our approach within an AFL deployment consisting of 10 simulated
clients with heterogeneous compute constraints and non-IID data. The simulation
results, using the FashionMNIST dataset, demonstrate over 10% improvement in
global model accuracy compared to the state-of-the-art methods PAPAYA and
FedAsync. Our dynamic aggregation method allows reliable global model training
despite limited client resources and
statistical data heterogeneity. This improves robustness and scalability for
real-world FL deployments.
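The abstract only sketches the aggregation rule at a high level. Below is a minimal Python sketch of one plausible reading of it, in which a client's mixing weight is scaled by its upload frequency relative to the average (so infrequently uploading, resource-constrained clients are not drowned out) and the refreshed global model is handed back to the client immediately after its upload. The AsyncAggregationServer class, the base_mixing parameter, and the exact scoring formula are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of frequency-weighted asynchronous aggregation.
# NOTE: the scoring rule and class/parameter names are assumptions made for
# illustration; they are not taken from the paper. Model parameters are
# represented as NumPy arrays for simplicity.
import numpy as np
from collections import defaultdict


class AsyncAggregationServer:
    def __init__(self, global_model, base_mixing=0.5):
        self.global_model = global_model          # current global parameters
        self.upload_counts = defaultdict(int)     # per-client upload frequency
        self.base_mixing = base_mixing            # base mixing coefficient (assumed)

    def receive_update(self, client_id, client_model):
        """Handle one asynchronous client upload and immediately return
        the refreshed global model so the client can keep training."""
        self.upload_counts[client_id] += 1
        counts = np.array(list(self.upload_counts.values()), dtype=float)
        avg_count = counts.mean()

        # Score the client: clients that upload less often than average
        # (slower, resource-constrained devices) get a larger weight so the
        # global model is not dominated by fast clients. (Assumed rule.)
        score = avg_count / self.upload_counts[client_id]
        alpha = float(np.clip(self.base_mixing * score, 0.0, 1.0))

        # Asynchronous weighted mixing of the stale client update into the
        # global model (FedAsync-style update with a dynamic weight).
        self.global_model = (1.0 - alpha) * self.global_model + alpha * client_model

        # Immediately return the updated global model to reduce client idle time.
        return self.global_model.copy()


# Toy usage: a fast client uploads often, a slow client uploads once.
if __name__ == "__main__":
    server = AsyncAggregationServer(global_model=np.zeros(4))
    for _ in range(5):
        server.receive_update("fast_client", np.ones(4))   # frequent uploads
    server.receive_update("slow_client", -np.ones(4))      # rare upload, larger weight
    print(server.global_model)
```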
Keywords
Machine Learning, Federated Learning, Scalability, Resource-constrained Devices, System Bias, Device Heterogeneity