MODA: Model Ownership Deprivation Attack in Asynchronous Federated Learning

IEEE Transactions on Dependable and Secure Computing (2023)

Abstract
Training a deep learning model from scratch requires a great deal of labeled data, computational resources, and expert knowledge. This time-consuming and complicated training procedure turns the trained model into valuable intellectual property (IP), spurring attackers' interest in model copyright infringement and theft. Recently, a new line of defense leverages watermarking techniques to inject watermarks during training and verify model ownership when necessary. To the best of our knowledge, there is no prior work on model ownership stealing attacks in federated learning, and existing defense or mitigation methods cannot be directly applied to federated learning scenarios. In this paper, we introduce watermarking of neural networks in asynchronous federated learning and propose a novel model privacy attack, dubbed the model ownership deprivation attack (MODA). MODA is launched by an inside adversarial participant who aims to occupy and deprive the remaining participants (the victims) of their copyright to maximize his own profit. Extensive experimental results on five benchmark datasets (MNIST, Fashion-MNIST, GTSRB, SVHN, CIFAR10) show that MODA is highly effective in a two-participant learning scenario with only a minor impact on model performance. When extended to a multi-participant scenario, MODA still maintains a high attack success rate and classification accuracy. Compared to state-of-the-art works, MODA achieves a higher attack success rate than the black-box solution and comparable efficacy to the white-box approach.
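The abstract does not detail MODA's training procedure, but watermark-based ownership claims of this kind are commonly realized by mixing a secret trigger set into a participant's local updates and later verifying the model's behavior on that trigger set. Below is a minimal, hypothetical PyTorch sketch of that general idea; the function names (`embed_watermark_update`, `verify_ownership`), the combined loss, and the random trigger set are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of trigger-set watermark embedding by an FL participant.
# All names and the loss formulation are illustrative assumptions; the
# abstract does not specify MODA's actual procedure.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def embed_watermark_update(model, task_loader, trigger_loader, lr=0.01):
    """One local round: optimize the task loss plus a watermark loss on
    trigger samples, then return the parameter delta the participant
    would submit to the asynchronous FL server."""
    before = {k: v.clone() for k, v in model.state_dict().items()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for (x, y), (xt, yt) in zip(task_loader, trigger_loader):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(xt), yt)
        loss.backward()
        opt.step()
    return {k: model.state_dict()[k] - before[k] for k in before}

def verify_ownership(model, trigger_loader, threshold=0.9):
    """Watermark verification: ownership is claimed if the model labels
    the secret trigger set as intended with high accuracy."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for xt, yt in trigger_loader:
            correct += (model(xt).argmax(dim=1) == yt).sum().item()
            total += yt.numel()
    return correct / max(total, 1) >= threshold

# Toy demonstration on random data (MNIST-shaped inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
task = DataLoader(TensorDataset(torch.randn(64, 1, 28, 28),
                                torch.randint(0, 10, (64,))), batch_size=16)
triggers = DataLoader(TensorDataset(torch.randn(64, 1, 28, 28),
                                    torch.randint(0, 10, (64,))), batch_size=16)
delta = embed_watermark_update(model, task, triggers)
print("ownership verified:", verify_ownership(model, triggers))
```

In an actual attack of the kind the abstract describes, the adversarial participant would repeat such watermark-laden updates across asynchronous rounds so that the aggregated global model carries his watermark, letting him later pass the ownership verification while the victims cannot.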
Keywords
Asynchronous federated learning, DNN watermarking, ownership verification, privacy attack