N-FedAvg: Novel Federated Average Algorithm Based on FedAvg

2022 14th International Conference on Communication Software and Networks (ICCSN)(2022)

Abstract
As a secure and efficient distributed machine learning technology, federated learning has been widely adopted across many industries. However, several shortcomings of the traditional FedAvg algorithm complicate its deployment in practice. This paper therefore proposes a novel federated learning algorithm, N-FedAvg, based on FedAvg. N-FedAvg selects clients in sequence before each round, reducing the randomness of client selection so that every client's data has the opportunity to participate in federated learning and no client's local data is excluded merely because it was sampled with low probability. In addition, as gradient descent brings the objective function close to the global minimum of the loss, the learning rate should be reduced so that the model can approach this point as closely as possible; cosine annealing, implemented with the cosine function, achieves this and produces good results. Finally, the sparsity strategy is essentially a model compression method: it transmits only a small number of parameters, reducing the network bandwidth required between the server and the clients, and it also helps prevent leakage of the global model parameters. Comparative experiments show that the proposed N-FedAvg algorithm is 1.34% more accurate than the traditional FedAvg algorithm on the CIFAR-10 dataset, with a 2.77% lower loss value.
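The paper does not include code; the following is a minimal sketch of the three ideas the abstract describes — round-robin client selection, a cosine-annealed learning rate, and top-k sparsification of model updates. All function names, the `keep_ratio` parameter, and the list-based representation of a model update are illustrative assumptions, not the authors' implementation.

```python
import math

def select_clients(num_clients, clients_per_round, round_idx):
    """Sequential (round-robin) selection: clients are taken in order,
    so every client's data participates once per cycle instead of
    being drawn at random as in plain FedAvg (illustrative sketch)."""
    start = (round_idx * clients_per_round) % num_clients
    return [(start + i) % num_clients for i in range(clients_per_round)]

def cosine_annealed_lr(lr_max, lr_min, round_idx, total_rounds):
    """Cosine annealing: the learning rate decays smoothly from lr_max
    toward lr_min, shrinking as training approaches the loss minimum."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * round_idx / total_rounds))

def sparsify(update, keep_ratio=0.1):
    """Top-k sparsification (one common sparsity strategy): keep only
    the largest-magnitude entries of an update, reducing the bandwidth
    needed between server and clients and exposing only part of the
    global model's parameters."""
    k = max(1, int(len(update) * keep_ratio))
    top = sorted(range(len(update)), key=lambda i: abs(update[i]),
                 reverse=True)[:k]
    return {i: update[i] for i in top}  # sparse index -> value map
```

For example, with 10 clients and 3 per round, rounds 0 and 1 select clients `[0, 1, 2]` and `[3, 4, 5]` respectively, so all clients are covered deterministically over successive rounds.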
Keywords
Federated Learning, Client Selection, Dynamic Learning Rate, Model Compression