Differential Privacy Protection Over Deep Learning: An Investigation Of Its Impacted Factors

COMPUTERS & SECURITY (2020)

Cited by 9 | Viewed 32
Abstract
Deep learning (DL) has been widely applied with promising results in many fields, but it still raises various privacy concerns and issues. Applying differential privacy (DP) to DL models is an effective way to ensure privacy-preserving training and classification. In this paper, we revisit the DP stochastic gradient descent (DP-SGD) method, which has been adopted by several algorithms and systems and achieves good privacy protection. However, several factors, such as the order in which noise is added and the models used, may affect its performance to varying degrees. We empirically show that adding noise first and clipping second not only achieves significantly higher accuracy but also accelerates convergence. Rigorous experiments have been conducted on three different datasets to train two popular DL models, a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. For the CNN, the accuracy rate is increased by 3%, 8% and 10% on average for the respective datasets, and the loss value is reduced by 18%, 14% and 22% on average. For the LSTM, the accuracy rate is increased by 18%, 13% and 12% on average, and the loss value is reduced by 55%, 25% and 23% on average. Meanwhile, we have compared the performance of our proposed method with a state-of-the-art SGD-based technique. The results show that, given a reasonable clipping threshold, the proposed method not only performs better but also achieves the desired privacy protection. The proposed alternative can be applied to many existing privacy-preserving solutions. (C) 2020 Elsevier Ltd. All rights reserved.
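The abstract's central claim concerns the ordering of the noise-addition and gradient-clipping steps inside DP-SGD. The following is a minimal NumPy sketch of the two orderings being compared; the function name, shapes, and the noise scale sigma = noise_multiplier * clip_norm are illustrative assumptions, not the authors' implementation. The clip-then-noise branch follows the standard DP-SGD recipe of Abadi et al. (2016); the noise-then-clip branch mirrors the variant the paper advocates.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier,
                noise_first=True, rng=None):
    """Aggregate one batch of per-example gradients with clipping and noise.

    noise_first=True sketches the ordering the paper advocates (add noise,
    then clip); noise_first=False sketches standard DP-SGD (clip, then add
    noise once to the sum). Names and shapes are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    grads = np.asarray(per_example_grads, dtype=np.float64)  # (batch, dim)
    sigma = noise_multiplier * clip_norm  # assumed noise scale

    if noise_first:
        # Variant studied in the paper: perturb each per-example gradient,
        # then clip its L2 norm to clip_norm before averaging.
        noisy = grads + rng.normal(0.0, sigma, size=grads.shape)
        norms = np.linalg.norm(noisy, axis=1, keepdims=True)
        clipped = noisy / np.maximum(1.0, norms / clip_norm)
        return clipped.mean(axis=0)

    # Standard DP-SGD (Abadi et al., 2016): clip each per-example gradient,
    # sum, add Gaussian noise once, then average over the batch.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / clip_norm)
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma, size=grads.shape[1])
    return noisy_sum / grads.shape[0]


# Toy usage: 4 per-example gradients of dimension 3.
g = np.random.default_rng(1).normal(size=(4, 3))
print(dp_sgd_step(g, clip_norm=1.0, noise_multiplier=1.1, noise_first=True))
print(dp_sgd_step(g, clip_norm=1.0, noise_multiplier=1.1, noise_first=False))
```

The sketch only illustrates the mechanical difference the paper evaluates; the privacy accounting for each ordering depends on its sensitivity analysis, which is beyond this fragment.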
Keywords
Differential privacy, Privacy preserving, Deep learning, Stochastic gradient descent (SGD)