Federated learning with ℓ1 regularization

Pattern Recognition Letters (2023)

Abstract
Federated Learning (FL) is a widely adopted deep learning paradigm that solves learning tasks by federating distributed devices, without collecting raw training data. Because data are heterogeneously distributed across clients, local training drifts each client toward its own local optimum, producing divergent local models; the global model obtained by aggregating these local models may then move away from the global optimum. This phenomenon, known as client drift, often hinders the performance of FL. Parameter regularization methods address client drift by controlling the update direction of each client, treating the global model as both the starting point and the reference for the inductive bias in the penalty term. However, existing regularization approaches produce dense solutions, so all parameters must be updated during local training. Meanwhile, several studies on deep learning have found that updating all parameters at every round is unnecessary. In this work, we therefore design a novel FL training approach called Fed-ℓ1, which alleviates the performance degradation of FL by updating only a subset of the parameters at each round. ℓ1 regularization is used both to control each client's update direction and to avoid unnecessary parameter updates. To our knowledge, this study is the first to introduce a sparse regularization term to correct the local training of individual clients in FL. We design a stochastic subgradient descent algorithm to train the resulting nonsmooth ℓ1-regularized model. Comparison experiments against state-of-the-art baselines verify the superiority of the proposed approach. © 2023 Published by Elsevier B.V.
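The local training step described in the abstract can be sketched as follows. This is a minimal NumPy illustration only: the exact penalty form λ‖w − w_global‖₁ anchored at the global model, and the function name `local_subgradient_step`, are assumptions inferred from the abstract, not the paper's precise formulation.

```python
import numpy as np

def local_subgradient_step(w, grad_loss, w_global, lam=0.1, lr=0.01):
    """One stochastic subgradient step on the assumed local objective
    f(w) + lam * ||w - w_global||_1.

    A subgradient of the ℓ1 penalty is lam * sign(w - w_global);
    np.sign returns 0 where w == w_global, which is a valid subgradient
    choice and leaves parameters that already match the global model
    untouched when grad_loss is zero there (the sparse-update effect).
    """
    subgrad = grad_loss + lam * np.sign(w - w_global)
    return w - lr * subgrad

# Illustrative usage: coordinates equal to the global model stay fixed
# when their loss gradient is zero, while others shrink toward it.
w_global = np.zeros(3)
w = np.array([1.0, -1.0, 0.0])
w_new = local_subgradient_step(w, np.zeros(3), w_global, lam=0.1, lr=1.0)
print(w_new)  # the third coordinate remains exactly 0.0
```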
Keywords
Federated learning, ℓ1 regularization, Stochastic subgradient descent