Can We Trust the Similarity Measurement in Federated Learning?

CoRR (2023)

Abstract
Is it secure to measure the reliability of local models by similarity in federated learning (FL)? This paper investigates a previously unexplored security threat arising from the use of similarity metrics, such as the L_2 norm, Euclidean distance, and cosine similarity, to protect FL. We first uncover a deficiency of these metrics: high-dimensional local models, whether benign or poisoned, may be evaluated as having the same similarity even though their parameter values differ significantly. We then leverage this finding to devise a novel untargeted model poisoning attack, Faker, which simultaneously maximizes the evaluated similarity of the poisoned local model and the difference in its parameter values. Experimental results on seven datasets and eight defenses show that Faker outperforms state-of-the-art benchmark attacks by 1.1-9.0x in reducing accuracy and 1.2-8.0x in saving time cost; these results hold even for a single malicious client with limited knowledge about the FL system. Moreover, Faker can degrade the performance of the global model by attacking only once. We also preliminarily explore extending Faker to other attacks, such as backdoor and Sybil attacks. Lastly, we provide a model evaluation strategy, called similarity of partial parameters (SPP), to defend against Faker. Given that numerous mechanisms in FL rely on similarity metrics to assess local models, this work suggests that we should be vigilant about the potential risks of using these metrics.
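As a quick intuition for the deficiency described in the abstract, the sketch below (an illustrative NumPy example, not the paper's Faker attack; the names g, w, and w_fake are hypothetical) constructs a vector that matches a benign update's L_2 norm and cosine similarity to a reference model exactly, while its parameter values point in a very different direction. Any defense that relies only on these two scores would rate both vectors identically.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                          # dimensionality of a flattened model update

g = rng.normal(size=d)              # reference vector (e.g., global model / mean update)
w = g + 0.1 * rng.normal(size=d)    # a "benign" local model close to the reference

# Statistics a similarity-based defense might check.
r = np.linalg.norm(w)                                    # L2 norm of the benign model
cos = w @ g / (np.linalg.norm(w) * np.linalg.norm(g))    # cosine similarity to g

# Build a different vector with the SAME norm and SAME cosine similarity:
# keep the same component along g, but put the remaining mass along a
# random direction orthogonal to g instead of the benign one.
u = rng.normal(size=d)
u -= (u @ g) / (g @ g) * g          # remove the component along g
u /= np.linalg.norm(u)              # unit vector orthogonal to g
g_hat = g / np.linalg.norm(g)

w_fake = r * cos * g_hat + r * np.sqrt(1.0 - cos**2) * u

print(np.linalg.norm(w_fake) - r)                                        # ~0: same L2 norm
print(w_fake @ g / (np.linalg.norm(w_fake) * np.linalg.norm(g)) - cos)   # ~0: same cosine similarity
print(np.linalg.norm(w_fake - w), np.linalg.norm(w - g))                 # w_fake deviates from w more than w deviates from g
```

In high dimensions there are infinitely many such vectors on the same cone around g, which is the gap between "similar according to the metric" and "similar in parameter values" that the paper exploits.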
Keywords
federated learning, similarity measurement, trust