Communication-Efficient Federated Learning with Sparsity and Quantization.

Zhong Long, Yuling Chen, Hui Dou, Yun Luo, Chaoyue Tan, Yancheng Sun

2023 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) (2023)

Abstract
Google proposed federated learning in 2016 to address data silos and privacy leakage in machine learning. However, as target tasks become more difficult and model performance requirements rise, larger models with more parameters must be transmitted, which increases communication costs. To address this problem, this paper proposes Federated Learning with Sparsity and Quantization (FedSQ): in each round, each participant selects the gradients with the largest rate of change for transmission, binary compression is used to quantize the selected model parameters, and a suitable aggregation algorithm replaces the original FedAvg, so that model accuracy is not excessively impaired under high compression ratios. Finally, we verify the effectiveness of the proposed scheme, which improves the quality of model training under model compression.
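As a rough illustration of the compression pipeline the abstract describes (top-k selection of the most-changed gradient entries followed by binary quantization), the sketch below combines magnitude-based sparsification with sign quantization scaled by the mean kept magnitude. The function names, the keep fraction, and the scaling choice are assumptions for illustration, not the authors' exact FedSQ implementation.

```python
# Minimal sketch: top-k gradient sparsification + binary (sign) quantization.
# All names and parameters here are illustrative assumptions.
import numpy as np

def sparsify_topk(grad, fraction=0.01):
    """Keep only the largest-magnitude `fraction` of gradient entries."""
    flat = grad.ravel()
    k = max(1, int(fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest entries
    return idx, flat[idx], grad.shape

def quantize_binary(values):
    """Quantize kept values to their sign, scaled by the mean absolute value."""
    scale = float(np.mean(np.abs(values))) if values.size else 0.0
    signs = np.sign(values).astype(np.int8)        # 1 bit of information per value
    return signs, scale

def decompress(idx, signs, scale, shape):
    """Server-side reconstruction of the sparse, quantized update."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = signs * scale
    return flat.reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = rng.normal(size=(256, 128))                # stand-in for a client gradient
    idx, vals, shape = sparsify_topk(g, fraction=0.01)
    signs, scale = quantize_binary(vals)
    g_hat = decompress(idx, signs, scale, shape)
    print(f"kept {idx.size / g.size:.2%} of entries, relative error "
          f"{np.linalg.norm(g - g_hat) / np.linalg.norm(g):.3f}")
```

In this sketch the uplink payload per tensor is only the kept indices, one sign bit per kept value, and a single scale, which is the source of the high compression ratios the abstract refers to; the modified aggregation rule that replaces FedAvg is not specified in the abstract and is therefore not shown here.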
Keywords
federated learning, distributed machine learning, gradient compression, model quantization