Model elasticity for hardware heterogeneity in federated learning systems

Allen-Jasmin Farcas, Xiaohan Chen, Zhangyang Wang, Radu Marculescu

Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network (2022)

Most Federated Learning (FL) algorithms proposed to date obtain the global model by aggregating multiple local models that typically share the same architecture, thus overlooking the hardware heterogeneity of edge devices. To address this issue, we propose a model-architecture co-design framework for FL optimization based on the new concept of model elasticity. More precisely, we enable local devices to train different models belonging to the same architecture family, selected to match the resource budgets (e.g., latency, memory, power) of various edge devices. Our results on EMNIST and CIFAR-10, for both IID and non-IID cases, show up to 2.44X less data transferred per communication round and up to 100X fewer communication rounds, while providing the same or better accuracy compared to existing approaches.
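The core idea of matching each edge device to a member of one architecture family can be sketched as follows. This is a minimal illustration, not the authors' implementation: the family variants, parameter sizes, and device budgets below are hypothetical, and only the memory dimension of the resource budget is modeled.

```python
# Hypothetical architecture family: variants differing only in width
# multiplier, so their weights can be related during FL aggregation.
MODEL_FAMILY = [
    {"name": "net-0.25x", "params_mb": 1.2},
    {"name": "net-0.5x",  "params_mb": 4.5},
    {"name": "net-1.0x",  "params_mb": 17.0},
]

def select_model(memory_budget_mb):
    """Pick the largest family member that fits the device's memory budget."""
    feasible = [m for m in MODEL_FAMILY if m["params_mb"] <= memory_budget_mb]
    if not feasible:
        raise ValueError("no family member fits the budget")
    return max(feasible, key=lambda m: m["params_mb"])

# Example: three devices with heterogeneous memory budgets (in MB).
devices = {"phone": 2.0, "tablet": 8.0, "laptop": 32.0}
assignment = {name: select_model(budget)["name"]
              for name, budget in devices.items()}
```

Because every assigned model comes from the same family, smaller variants transmit fewer parameters per round, which is where the reported reduction in communicated data would come from in such a scheme.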