Input Validation for the Laplace Differential Privacy Mechanism

2015 20th International Conference on Control Systems and Computer Science (2015)

Abstract
Privacy is an increasing concern as the number of databases containing personal information grows. Differential privacy algorithms can be used to provide safe database queries through the insertion of noise. Attackers cannot recover pieces of the initial data with certainty, but this comes at the cost of data utility. Noise insertion leads to errors, and signal to noise ratio can become an issue. In such cases, current differential privacy mechanisms cannot inform the end user that the sanitized data might not be reliable. We propose a new differential privacy algorithm that signals the user when relative errors surpass a predefined threshold. This allows users running complex differential privacy algorithms, such as sequence processing or geographical data analysis, to improve utility through better management of large errors. We prove that our algorithm satisfies differential privacy, and perform a formal analysis of its performance. Finally, we provide guidelines on how to customize behaviour to improve results.
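The abstract describes adding Laplace noise calibrated to query sensitivity and privacy budget, and flagging results whose relative error exceeds a threshold. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual algorithm: the function names, the threshold parameter, and the reliability rule (comparing the Laplace scale, which equals the expected absolute error, against the threshold times the true value) are assumptions. Note that a naive check like this one consults the true value and would itself leak information; the paper's contribution is a variant proven to satisfy differential privacy.

```python
import numpy as np

def laplace_with_warning(true_value, sensitivity, epsilon,
                         rel_error_threshold=0.5, rng=None):
    """Hypothetical sketch: Laplace mechanism plus a relative-error flag.

    Returns (noisy_value, unreliable). The flag rule here is illustrative
    only and is NOT privacy-preserving as written.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon          # standard Laplace calibration
    noisy = true_value + rng.laplace(0.0, scale)
    # Expected absolute error of Laplace(0, b) noise is b, so the expected
    # relative error is roughly scale / |true_value| (assumed rule).
    unreliable = (abs(true_value) == 0
                  or scale / abs(true_value) > rel_error_threshold)
    return noisy, unreliable
```

For a large count (e.g. 1000) with sensitivity 1 and epsilon 1, the expected relative error is about 0.1%, so the flag stays off; for a count of 1 under the same budget, the noise scale equals the value itself and the result is flagged as unreliable.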
Keywords
Differential privacy, Laplace distribution, Privacy, Security