Differentially private regression analysis with dynamic privacy allocation

Knowledge-Based Systems (2021)

Abstract
In recent years, machine learning has achieved remarkable success in the domain of artificial intelligence. However, during model training, machine learning models risk disclosing sensitive information contained in the training data, so there is an urgent need to reduce the risk of such leakage. As a novel privacy-preserving mechanism, differential privacy effectively addresses the shortcomings of traditional privacy models while providing a provable privacy guarantee. However, the existing literature on differentially private regression models is limited and lacks dynamic privacy-allocation methods, which can upset the balance between the privacy guarantee and model performance. In this paper, we propose an adaptive differentially private regression model that strengthens the privacy guarantee without sacrificing much model utility; it allocates the privacy budget dynamically through a relevance-based noise-imposition mechanism. We add less noise to the objective function when input features contribute strongly to the model output, and vice versa. Theoretical analysis and rigorous experiments demonstrate that our approach not only retains desirable model utility under a modest privacy budget but also reduces the potential risk of privacy disclosure.
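The core idea of relevance-based budget allocation can be illustrated with a minimal sketch. The function below is an assumption-laden illustration, not the paper's actual algorithm: it scores each feature's relevance by its absolute correlation with the target, splits a total privacy budget across features in proportion to that score, and perturbs the coefficients of the least-squares objective with Laplace noise whose scale is inversely proportional to each feature's budget (so more relevant features receive less noise). The function name, the relevance measure, and the sensitivity constant are all illustrative placeholders.

```python
import numpy as np

def dp_regression_relevance(X, y, total_eps=1.0, sens=2.0, seed=0):
    """Sketch of relevance-based dynamic privacy-budget allocation for
    linear regression (functional-mechanism style). Illustrative only:
    the relevance score and sensitivity bound are assumptions, not the
    paper's exact construction."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Relevance: absolute Pearson correlation between each feature and y
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)])
    # Dynamic allocation: relevant features get a larger share of the budget
    eps = total_eps * rel / rel.sum()
    # Laplace scale per feature: smaller budget -> larger noise
    scale = sens / eps
    # Perturb the coefficients of the quadratic loss
    # (1/2) w^T (X^T X) w - (X^T y)^T w, then solve the noisy normal equations
    XtX = X.T @ X + rng.laplace(0.0, scale[:, None], size=(d, d))
    Xty = X.T @ y + rng.laplace(0.0, scale)
    XtX = (XtX + XtX.T) / 2 + 1e-3 * np.eye(d)  # symmetrize and regularize
    return np.linalg.solve(XtX, Xty)
```

With a generous total budget the noise becomes negligible and the solution approaches ordinary least squares; shrinking `total_eps` degrades the less relevant coordinates first, which is exactly the trade-off the abstract describes.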
Keywords
Machine learning, Differential privacy, Dynamic privacy allocation