Robust Aggregation Technique Against Poisoning Attacks in Multi-Stage Federated Learning Applications.

Consumer Communications and Networking Conference (2024)

Abstract
Federated Learning (FL) is a distributed Machine Learning (ML) technique that allows model training without sharing data. FL is vulnerable to poisoning attacks, in which an adversary manipulates the learning process by supplying false information to the federation. Ensuring security in FL is vital before it is deployed in real applications, as the consequences of a successful attack can be severe. Multi-stage FL is a novel variant of FL that performs intermediate model aggregations, thereby reducing the traffic toward the FL central server. Existing robust aggregation techniques are insufficient for multi-stage FL systems. This paper proposes a novel robust aggregation algorithm against poisoning attacks in a three-layer multi-stage FL system consisting of device, edge, and cloud layers. We evaluate the proposed algorithm on an Augmented Reality (AR) application under different poisoner placements and attack strategies. The evaluation results show that the proposed algorithm can effectively defend against poisoning attacks in three-layer multi-stage FL systems.
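The abstract does not spell out the paper's aggregation rule, so the following is only a minimal sketch of how robust aggregation can be applied at both intermediate stages of a three-layer (device, edge, cloud) FL system. It uses coordinate-wise median as a stand-in robust aggregator; the function names (`edge_aggregate`, `cloud_aggregate`) and the toy setup are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Combine a list of 1-D parameter vectors by taking the per-coordinate
    median, which bounds the influence of a minority of poisoned updates.
    (Stand-in robust aggregator, not the paper's specific rule.)"""
    return np.median(np.stack(updates), axis=0)

def edge_aggregate(device_updates):
    """Intermediate aggregation at one edge node over its attached devices."""
    return coordinate_wise_median(device_updates)

def cloud_aggregate(edge_models):
    """Final aggregation at the cloud over the edge-level models."""
    return coordinate_wise_median(edge_models)

# Toy example: 3 edge nodes, each serving 5 devices training a 4-parameter model.
rng = np.random.default_rng(0)
true_update = np.array([0.5, -0.2, 0.1, 0.3])

edge_models = []
for _ in range(3):
    device_updates = [true_update + 0.01 * rng.standard_normal(4) for _ in range(5)]
    # One poisoned device per edge submits an inverted, scaled update.
    device_updates[0] = -10.0 * true_update
    edge_models.append(edge_aggregate(device_updates))

global_update = cloud_aggregate(edge_models)
print(global_update)  # stays close to true_update despite the poisoned devices
```

In this sketch the robust rule is applied twice, once per aggregation stage, so a poisoned device must first survive its edge-level aggregation before it can affect the cloud-level model; the actual placement and choice of the robust rule in the paper may differ.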
Keywords
Federated Learning, Poisoning Attacks, Augmented Reality, Multi-stage FL