An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack

IEEE Access (2019)

Cited by 13 | Views 5
Abstract
As the amount of data and computational power increase explosively, valuable results are being created using machine learning techniques. In particular, models based on deep neural networks have shown remarkable performance in various domains. On the other hand, alongside the development of neural network models, privacy concerns have been raised. Recently, as privacy breach attacks on the training datasets of neural network models have been proposed, research on privacy-preserving neural networks has been conducted. Among the privacy-preserving approaches, differential privacy provides a strict privacy guarantee, and various differentially private mechanisms have been studied for neural network models. However, it is not clear how appropriate privacy parameters should be chosen when both the model's performance and the degree of privacy guarantee are taken into account. In this paper, we study how to set privacy parameters for differentially private learning based on the resistance to privacy breach attacks on neural networks. In particular, we focus on the model inversion attack against neural network models and study how to apply differential privacy as a countermeasure to this attack while retaining the utility of the model. To quantify resistance to the model inversion attack, we introduce a new attack performance metric that leverages a deep learning model instead of a survey-based approach, and we capture the relationship between attack probability and the degree of privacy guarantee.
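The differentially private learning the abstract refers to is commonly realized with DP-SGD (Abadi et al., 2016), which clips each example's gradient and adds calibrated Gaussian noise; the clipping norm and noise multiplier are exactly the kind of privacy parameters whose selection the paper studies. The NumPy sketch below illustrates one such private update step. It is a minimal illustration, not the paper's own implementation; the function name dp_sgd_step and all default values are assumptions chosen for the example.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                rng=np.random.default_rng(0)):
    """One DP-SGD update: per-example clipping plus Gaussian noise.

    per_example_grads: array of shape (batch_size, num_params).
    clip_norm and noise_multiplier are illustrative privacy parameters;
    smaller noise gives better utility but a weaker privacy guarantee.
    """
    # 1. Clip each example's gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # 2. Sum the clipped gradients and add noise calibrated to the
    #    clipping bound (sensitivity), per the Gaussian mechanism.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # 3. Average over the batch to obtain the private gradient estimate.
    return noisy_sum / per_example_grads.shape[0]

# Example: a batch of 32 per-example gradients over 10 parameters.
grads = np.random.default_rng(1).normal(size=(32, 10))
print(dp_sgd_step(grads))
```

In this framing, the paper's question of "how to set appropriate privacy parameters" amounts to tuning values such as noise_multiplier until the measured model inversion attack success drops to an acceptable level while model accuracy remains usable.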
Keywords
Differential privacy, differentially private learning, model inversion attack, privacy-preserving neural network