Expanding the Reach of Federated Learning by Reducing Client Resource Requirements

arXiv: Learning (2018)

Cited 424 | Views 276
Abstract
Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation. To address this issue, we introduce two novel strategies to reduce communication costs: (1) the use of lossy compression on the global model sent server-to-client; and (2) Federated Dropout, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation. We empirically show that these strategies, combined with existing compression approaches for client-to-server communication, collectively provide up to a $14\times$ reduction in server-to-client communication, a $1.7\times$ reduction in local computation, and a $28\times$ reduction in upload communication, all without degrading the quality of the final model. We thus comprehensively reduce FL's impact on client device resources, allowing higher capacity models to be trained, and a more diverse set of users to be reached.
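To make the Federated Dropout idea concrete, here is a minimal NumPy sketch (not the authors' code) for a two-layer dense network. The layer sizes, the `keep_frac` parameter, and the helper names are illustrative assumptions: the server keeps a random subset of hidden units, slices the weight matrices so the client receives and trains only a submodel, and then maps the trained sub-weights back into the global model.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_submodel(W1, W2, keep_frac, rng):
    """Server side: keep a random fraction of hidden units and slice
    the weight matrices, so the client downloads a smaller model."""
    n_hidden = W1.shape[1]
    kept = rng.choice(n_hidden, size=int(keep_frac * n_hidden), replace=False)
    kept.sort()
    # Restrict both matrices to the surviving hidden units.
    return W1[:, kept], W2[kept, :], kept

def merge_update(W1, W2, sub_W1, sub_W2, kept):
    """Server side: write the client's trained sub-weights back into
    the full global matrices at the positions that were kept."""
    W1, W2 = W1.copy(), W2.copy()
    W1[:, kept] = sub_W1
    W2[kept, :] = sub_W2
    return W1, W2

# Hypothetical global model: 784 -> 100 -> 10 dense layers.
W1 = rng.normal(size=(784, 100))
W2 = rng.normal(size=(100, 10))

sub_W1, sub_W2, kept = extract_submodel(W1, W2, keep_frac=0.75, rng=rng)
# The client would train sub_W1/sub_W2 locally and upload them;
# here a small in-place shift stands in for that local training.
new_W1, new_W2 = merge_update(W1, W2, sub_W1 + 0.01, sub_W2 + 0.01, kept)
```

With `keep_frac = 0.75`, both sliced matrices carry 75% of the original parameters, so download, upload, and local compute all shrink together, which is the mechanism behind the reductions reported in the abstract.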