Privacy vs Utility analysis when applying Differential Privacy on Machine Learning Classifiers.

WiMob (2023)

Abstract
In this paper, we present how Differential Privacy (DP), a state-of-the-art privacy-preserving technology, interacts with four different Machine Learning (ML) classifiers. Preserving privacy while maintaining utility is a challenge for every ML implementation. To study the effects of different DP implementations on an ML method, we apply perturbation at different phases of the ML lifecycle: perturbing the data at its origin (Differential Privacy Method 1, DPM1), perturbing during the training process (DPM2), or perturbing the parameters of the trained model (DPM3), and we measure the effect of privacy preservation on model utility. Further, we test DPM1 with different perturbation mechanisms, namely the Laplace, Gaussian, Analytic Gaussian, Snapping, and Staircase mechanisms, and analyse the results to determine which works best. Each case is tested with varying privacy budgets. We use privacy attacks, namely the Membership Inference Attack (MIA) and the Attribute Inference Attack (AIA), to evaluate how well DP protects data privacy. Our experimental results show that perturbing at later stages of the ML lifecycle provides better utility. Among the DPM1 mechanisms, the improved Laplace and Gaussian variants achieve better utility while preserving privacy.
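As an illustration of the DPM1 approach (perturbing data at its origin), below is a minimal sketch of the classic Laplace mechanism in Python. This is not the paper's implementation; the function name, the unit sensitivity, and the epsilon value are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng=None):
    """Perturb each value with Laplace noise of scale sensitivity/epsilon.

    A smaller privacy budget epsilon means more noise (stronger privacy,
    lower utility); a larger epsilon means less noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=np.shape(values))

# Example: perturb a feature column before training (DPM1).
# sensitivity=1.0 and epsilon=0.5 are illustrative choices, not the
# budgets evaluated in the paper.
features = np.array([23.0, 41.5, 37.2, 52.8])
private_features = laplace_mechanism(features, sensitivity=1.0, epsilon=0.5)
```

The same noise-addition pattern can, under analogous assumptions, be moved to later stages of the pipeline: applied to gradients during training (DPM2) or to the learned model parameters after training (DPM3).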
Keywords
Differential Privacy, Machine Learning, Privacy vs Utility, Privacy-preserving technologies