Federated Learning via Attentive Margin of Semantic Feature Representations

IEEE Internet of Things Journal (2023)

Abstract
Federated learning (FL) in Internet of Things (IoT) systems enables distributed model training using a large corpus of decentralized training data dispersed among multiple IoT clients. In this distributed setting, system and statistical heterogeneity, in the form of highly imbalanced and nonindependent and identically distributed (non-i.i.d.) data stored on multiple devices, are likely to hinder model training. Existing methods aggregate models while disregarding the internal representations being learned, which nevertheless play an essential role in solving the pursued task, especially in the case of deep learning modules. To leverage feature representations in an FL framework, we introduce a method, called FedMargin, which computes client deviations using margins over feature representations learned on distributed data, and applies them to drive federated optimization via an attention mechanism. Local and aggregated margins are jointly exploited, taking into account local representation shift and representation discrepancy with the global model. In addition, we propose three methods to analyse statistical properties of feature representations learned in FL, in order to elucidate the relationship between accuracy, margins, and feature discrepancy of FL models. In experimental analyses, FedMargin demonstrates state-of-the-art accuracy and convergence rate across image classification and semantic segmentation benchmarks by enabling maximum-margin training of FL models. Moreover, FedMargin reduces the uncertainty of predictions of FL models compared to the baseline. In this work, we also evaluate FL models on dense prediction tasks, such as semantic segmentation, proving the versatility of the proposed approach.
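The abstract describes margin-driven attentive aggregation: each client's margin deviation from the aggregated margin modulates its weight in the global model update. The paper does not give the exact formulation here, so the following is a minimal illustrative sketch of one plausible realization, assuming softmax attention over absolute margin deviations; the function name `attentive_aggregate` and the `temperature` parameter are hypothetical, not from the paper.

```python
import numpy as np

def attentive_aggregate(client_params, client_margins, global_margin, temperature=1.0):
    """Aggregate client parameter vectors, weighting each client by an
    attention score derived from its margin deviation from the aggregated
    (global) margin. Illustrative sketch only, not the authors' exact
    FedMargin algorithm."""
    margins = np.asarray(client_margins, dtype=float)
    # Deviation of each client's local margin from the aggregated margin.
    deviations = np.abs(margins - global_margin)
    # Softmax attention over deviations: clients whose representations
    # deviate more receive larger weight in this hypothetical variant.
    scores = np.exp(deviations / temperature)
    weights = scores / scores.sum()
    # Attention-weighted average of the clients' parameter vectors.
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    return weights @ stacked, weights
```

With two clients whose margins deviate symmetrically from the global margin, the attention weights are equal and the aggregation reduces to plain federated averaging; asymmetric deviations skew the update toward the more deviant client.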
Keywords
Feature representations, federated learning (FL), Internet of Things (IoT), semantic segmentation