Model elasticity for hardware heterogeneity in federated learning systems
Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network (2022)
Abstract
Most Federated Learning (FL) algorithms proposed to date obtain the global model by aggregating multiple local models that typically share the same architecture, thus overlooking the hardware heterogeneity of edge devices. To address this issue, we propose a model-architecture co-design framework for FL optimization based on the new concept of model elasticity. More precisely, we enable local devices to train different models belonging to the same architecture family, selected to match the resource budgets (e.g., latency, memory, power) of the various edge devices. Our results on EMNIST and CIFAR-10, for both IID and non-IID cases, show up to 2.44X less data transferred per communication round and up to a 100X reduction in the number of communication rounds, while providing the same or better accuracy compared to existing approaches.
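To make the selection idea concrete, below is a minimal Python sketch of matching each device to the largest member of a shared architecture family that fits its resource budget. The family (a width-scaled one-hidden-layer MLP), the budget proxy (parameter count), and all names (WIDTH_MULTIPLIERS, param_count, select_width) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: each edge device picks the widest member of a shared
# architecture family that fits its resource budget. The family here is a
# hypothetical width-scaled MLP; parameter count stands in for memory.

WIDTH_MULTIPLIERS = [0.25, 0.5, 0.75, 1.0]  # shared architecture family

def param_count(width: float, in_dim: int = 784, hidden: int = 256,
                out_dim: int = 10) -> int:
    """Parameter count of a one-hidden-layer MLP scaled by `width`."""
    h = int(hidden * width)
    return in_dim * h + h + h * out_dim + out_dim

def select_width(memory_budget_params: int) -> float:
    """Pick the widest family member whose size fits the device budget."""
    feasible = [w for w in WIDTH_MULTIPLIERS
                if param_count(w) <= memory_budget_params]
    if not feasible:
        raise ValueError("No family member fits this device's budget")
    return max(feasible)

# Example: three heterogeneous edge devices with different memory budgets
# (expressed as a parameter-count proxy).
for budget in [60_000, 150_000, 250_000]:
    print(budget, "->", select_width(budget))
```

Because all family members share the same architecture template, their trained weights can still be aggregated into a single global model, which is what allows heterogeneous devices to participate in the same FL rounds.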