A personalized federated learning method based on the residual multi-head attention mechanism
Journal of King Saud University - Computer and Information Sciences(2024)
Abstract
Federated Learning (FL) is a distributed machine learning technique for collaboratively training models across multiple clients. It allows multiple local devices to cooperatively train a global model without compromising data privacy or requiring extensive data transfer. However, the inherent data heterogeneity across clients poses a challenge: a single global model trained through FL struggles to adapt to the diverse distributions of individual clients' data. This discrepancy leads to a marked decline in model accuracy, slows FL convergence, and can even cause FL to diverge. To this end, this paper proposes a method termed Residual Attention for Federated Learning (RAFL), aimed at personalized federated learning (PFL). RAFL applies a residual multi-head attention mechanism to enrich personalized feature information and leverages a global category embedding layer to learn global feature information. To evaluate the proposed method's effectiveness, we perform extensive experiments on three benchmark datasets, comparing it against eight baseline methods. The experimental results demonstrate that RAFL has better personalization capability than the other methods, achieving the highest accuracy on all three benchmark datasets. Notably, its accuracy is roughly 2.5 percentage points higher than that of current state-of-the-art PFL methods.
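The abstract's core building block, a residual multi-head attention layer (attention output added back to its input via a skip connection), can be sketched as follows. This is a generic illustration, not the paper's implementation: all function and weight names (`residual_multihead_attention`, `Wq`, `Wk`, `Wv`, `Wo`) are hypothetical, and the paper's global category embedding layer is not modeled here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_multihead_attention(x, Wq, Wk, Wv, Wo, num_heads):
    """Residual multi-head self-attention: out = x + MHA(x).

    x: (seq_len, d_model); Wq/Wk/Wv/Wo: (d_model, d_model).
    Illustrative sketch only, not the RAFL architecture itself.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    q, k, v = x @ Wq, x @ Wk, x @ Wv

    # Split projections into heads: (num_heads, seq_len, d_head).
    def split(t):
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    qh, kh, vh = split(q), split(k), split(v)

    # Scaled dot-product attention per head.
    scores = qh @ kh.transpose(0, 2, 1) / np.sqrt(d_head)
    heads = softmax(scores, axis=-1) @ vh

    # Concatenate heads back to (seq_len, d_model), project, add residual.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return x + concat @ Wo

rng = np.random.default_rng(0)
seq_len, d_model, num_heads = 4, 8, 2
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
out = residual_multihead_attention(x, Wq, Wk, Wv, Wo, num_heads)
print(out.shape)  # (4, 8)
```

The residual connection is the key detail: because the layer's output starts from the input `x`, the attention branch only has to learn a correction, which in a PFL setting lets a client-side attention module add personalized feature information on top of shared global features.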
Keywords
Federated Learning, Data heterogeneity, Personalized federated learning, Residual network, Attention mechanisms