Exploring Privacy-Preserving Techniques on Synthetic Data as a Defense Against Model Inversion Attacks

INFORMATION SECURITY, ISC 2023 (2023)

Abstract
In this work, we investigate privacy risks associated with model inversion attribute inference attacks. Specifically, we explore a case in which a governmental institute aims to release a trained machine learning model to the public (e.g., for collaboration or transparency reasons) without threatening privacy. The model predicts change of living place and is important for studying individuals' tendency to relocate. For this reason, it is called a propensity-to-move model. Our results first show that there is a potential leak of sensitive information when a propensity-to-move model is trained on the original data, in the form in which it was collected from the individuals. To address this privacy risk, we propose a data synthesis + privacy preservation approach: we replace the original training data with synthetic data, on top of which we apply privacy-preserving techniques. Our approach aims to maintain the prediction performance of the model while controlling the privacy risk. Related work has studied a one-step synthesis of privacy-preserving data. In contrast, here, we first synthesize data and then apply privacy-preserving techniques. We carry out experiments involving attacks on individuals included in the training data ("inclusive individuals") as well as attacks on individuals not included in the training data ("exclusive individuals"). In this regard, our work goes beyond conventional model inversion attribute inference attacks, which focus on individuals contained in the training data. Our results show that a propensity-to-move model trained on synthetic training data protected with privacy-preserving techniques achieves performance comparable to a model trained on the original training data. At the same time, we observe a reduction in the efficacy of certain attacks.
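The two-step approach described above (first synthesize data, then apply privacy-preserving techniques before training) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the toy features, the marginal-fitting synthesizer, and the use of Laplace noise as the privacy-preserving step are all assumptions chosen for concreteness.

```python
import math
import random

random.seed(0)

# Toy stand-in for the "original" records: (age, income, moved?).
# All values here are synthetic placeholders, not real data.
original = [
    (random.gauss(40, 10), random.gauss(30000, 5000), random.random() < 0.3)
    for _ in range(200)
]

def fit_and_sample(rows, n):
    """Step 1 (hypothetical synthesizer): fit simple per-feature marginals
    to the original data and sample a fresh synthetic dataset from them."""
    ages = [r[0] for r in rows]
    incomes = [r[1] for r in rows]
    p_move = sum(r[2] for r in rows) / len(rows)
    mu_a = sum(ages) / len(ages)
    sd_a = (sum((a - mu_a) ** 2 for a in ages) / len(ages)) ** 0.5
    mu_i = sum(incomes) / len(incomes)
    sd_i = (sum((i - mu_i) ** 2 for i in incomes) / len(incomes)) ** 0.5
    return [
        (random.gauss(mu_a, sd_a), random.gauss(mu_i, sd_i),
         random.random() < p_move)
        for _ in range(n)
    ]

def laplace_noise(scale):
    """Sample from a zero-mean Laplace distribution via inverse-CDF.
    Laplace noise is one standard privacy-preserving perturbation; the
    paper may use different techniques."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

# Step 1: replace the original training data with synthetic data.
synthetic = fit_and_sample(original, 200)

# Step 2: apply a privacy-preserving transformation on top of the
# synthetic data (here: additive noise on the numeric features).
protected = [
    (age + laplace_noise(2.0), income + laplace_noise(500.0), moved)
    for age, income, moved in synthetic
]

# `protected` would then be used to train the propensity-to-move model.
print(len(protected))
```

The key design point, as the abstract contrasts with prior one-step approaches, is that synthesis and privacy protection are decoupled: the privacy-preserving step is applied after synthesis rather than being built into the generator.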
Keywords
Synthetic data, privacy-preserving techniques, propensity-to-move, model inversion attack, attribute inference attack, machine learning