Differential privacy in deep learning: A literature survey

Neurocomputing (2024)

Abstract
The widespread adoption of deep learning is facilitated in part by the availability of large-scale data for training high-quality models. However, such data may contain sensitive personal information, which raises privacy concerns for data providers. Differential privacy is regarded as a key technique in the field of privacy preservation and has drawn considerable attention owing to its ability to provide rigorous and provable privacy guarantees for training data. Training deep learning models in a differentially private manner is gaining traction, as it effectively mitigates the reconstruction and inference of sensitive information. Motivated by this, we present a comprehensive and systematic study of differentially private deep learning from the facets of privacy attack and privacy preservation. We propose a new taxonomy for analyzing the privacy attacks faced in deep learning, and then survey the differential-privacy-based preservation techniques that counter such attacks. Finally, we offer a first examination of real-world applications of differentially private deep learning, and conclude with several potential avenues for future research. This survey provides promising directions for protecting sensitive information in training data via differential privacy during deep learning model training.
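For context, the rigorous guarantee alluded to above is usually formalized as (ε, δ)-differential privacy: a randomized mechanism \mathcal{M} satisfies (ε, δ)-differential privacy if, for every pair of datasets D and D' differing in a single record and every measurable set of outputs S,

\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta.

In deep learning, this guarantee is most commonly obtained with DP-SGD (Abadi et al., 2016), which clips each example's gradient and adds calibrated Gaussian noise before the parameter update. The following is a minimal NumPy sketch of one such update step, offered for illustration only; the hyperparameter values (clip norm, noise multiplier, learning rate) are assumptions, and the surveyed paper does not prescribe this particular implementation.

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD update: clip per-example gradients, average,
    add Gaussian noise, then take a step. Hyperparameters are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    batch_size = per_example_grads.shape[0]

    # Clipping bounds each record's influence on the update
    # (sensitivity = clip_norm).
    norms = np.linalg.norm(per_example_grads.reshape(batch_size, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale.reshape(
        batch_size, *([1] * (per_example_grads.ndim - 1)))

    # Gaussian noise calibrated to the clipping norm makes each step
    # differentially private; the total (eps, delta) budget follows from
    # composition (e.g., the moments accountant) over all training steps.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / batch_size
    return params - lr * noisy_grad

Larger noise multipliers give stronger privacy (smaller ε) at the cost of model utility, a recurring tension in differentially private deep learning.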
Keywords
Deep learning, Differential privacy, Privacy attack, Privacy preservation