Design of Anti-Plagiarism Mechanisms in Decentralized Federated Learning

IEEE Transactions on Services Computing (2024)

Abstract
In decentralized federated learning (DFL), clients exchange their models with one another for global aggregation. Because there is no centralized supervision, a client can easily duplicate the models shared by others to save its own computing resources. This plagiarism behavior is generally hard to detect, yet it is harmful to model training performance. To address this issue, we propose an anti-plagiarism DFL framework that efficiently detects plagiarism misconduct. Specifically, we first design a plagiarism detection method that adds a time-shift pseudo-noise (PN) sequence to each client's local model before broadcasting. Second, we derive an upper bound on the loss function of DFL under the proposed PN-sequence detection method, which is proved to be convex in both the amplitude of the PN sequences ( $\alpha$ ) and the detection threshold ( $\lambda$ ). Next, we propose an adaptive plagiarism detection (APD) algorithm that jointly optimizes $\alpha$ and $\lambda$ to enhance the learning performance. Finally, we conduct extensive experiments on the MNIST, Adult, Cifar-10, and SVHN datasets to demonstrate that our analytical bounds are consistent with the experimental results. Remarkably, the proposed framework can recover up to a 10% classification accuracy loss in the presence of 40% plagiaristic clients.
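The core idea of the detection method can be illustrated with a minimal sketch: a client adds a secret scaled PN sequence to its model weights before broadcasting, and later flags any received model whose correlation with that sequence exceeds a threshold. All names here (`make_pn`, `embed_pn`, `is_plagiarized`) and the use of a simple normalized-correlation test are illustrative assumptions, not the paper's exact algorithm, which optimizes $\alpha$ and $\lambda$ jointly and uses time-shifted sequences.

```python
import numpy as np

# Hedged sketch of PN-sequence-based plagiarism detection. Assumptions
# (not from the paper): weights are flattened vectors, the PN sequence is
# a client-private +/-1 sequence, and detection uses a normalized
# correlation compared against lam * alpha.

rng = np.random.default_rng(0)

def make_pn(length, seed):
    """Client-specific +/-1 pseudo-noise sequence from a private seed."""
    return np.where(np.random.default_rng(seed).random(length) < 0.5, -1.0, 1.0)

def embed_pn(weights, pn, alpha):
    """Add the scaled PN sequence to the local model before sharing."""
    return weights + alpha * pn

def correlate(weights, pn):
    """Normalized correlation between a received model and a PN sequence."""
    return float(np.dot(weights, pn) / len(pn))

def is_plagiarized(suspect_weights, own_pn, alpha, lam):
    """Flag a model whose correlation with our PN exceeds the threshold."""
    return correlate(suspect_weights, own_pn) > lam * alpha

# Toy example: a copied model retains the embedded PN signature
# (correlation close to alpha), while an independently trained model
# correlates near zero.
d = 100_000
w = rng.normal(size=d)             # honest local model of client A
pn_a = make_pn(d, seed=42)         # client A's secret PN sequence
alpha, lam = 0.05, 0.5

shared = embed_pn(w, pn_a, alpha)  # what client A broadcasts
copied = shared.copy()             # a plagiarist rebroadcasts A's model
honest = rng.normal(size=d)        # a different client's own model

print(is_plagiarized(copied, pn_a, alpha, lam))   # True
print(is_plagiarized(honest, pn_a, alpha, lam))   # False
```

The copied model's correlation concentrates around $\alpha$ (the cross-term between the weights and the $\pm 1$ sequence averages out), so the threshold $\lambda \alpha$ separates it from honest models; this also hints at the trade-off the paper analyzes, since a larger $\alpha$ eases detection but perturbs the model more.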
Keywords
Decentralized federated learning, plagiaristic client, pseudo-noise sequence, performance analysis