Differentially Private Deep Learning with Direct Feedback Alignment

arXiv (2020)

Abstract
Standard methods for differentially private training of deep neural networks replace back-propagated mini-batch gradients with biased and noisy approximations to the gradient. These modifications to training often result in a privacy-preserving model that is significantly less accurate than its non-private counterpart. We hypothesize that alternative training algorithms may be more amenable to differential privacy. Specifically, we examine the suitability of direct feedback alignment (DFA). We propose the first differentially private method for training deep neural networks with DFA and show that it achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures (fully connected, convolutional) and datasets.
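The listing does not reproduce the paper's exact privatized algorithm, but the idea can be illustrated. Below is a minimal NumPy sketch, assuming a one-hidden-layer network, where DFA replaces backpropagated error signals with a fixed random feedback matrix, and privacy is obtained DP-SGD-style via per-example clipping plus Gaussian noise. All names, dimensions, and hyperparameters (B1, clip, noise_mult, lr) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, not from the paper.
d_in, d_hid, d_out = 20, 32, 10
W1 = rng.normal(0, 0.1, (d_hid, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_hid))
B1 = rng.normal(0, 0.1, (d_hid, d_out))  # fixed random feedback matrix (DFA)

def forward(x):
    a1 = W1 @ x
    h1 = np.tanh(a1)
    logits = W2 @ h1
    p = np.exp(logits - logits.max())
    return a1, h1, p / p.sum()  # pre-activation, hidden state, softmax output

def dfa_private_step(X, Y, clip=1.0, noise_mult=1.0, lr=0.1):
    """One differentially private DFA step on a mini-batch (X, one-hot Y)."""
    global W1, W2
    g1 = np.zeros_like(W1)
    g2 = np.zeros_like(W2)
    for x, y in zip(X, Y):
        a1, h1, p = forward(x)
        e = p - y                                 # output error
        d1 = (B1 @ e) * (1 - np.tanh(a1) ** 2)   # DFA: project error with fixed B1
        u1 = np.outer(d1, x)                     # per-example update for W1
        u2 = np.outer(e, h1)                     # per-example update for W2
        # Clip the joint per-example update to L2 norm `clip`.
        norm = np.sqrt((u1 ** 2).sum() + (u2 ** 2).sum())
        scale = min(1.0, clip / (norm + 1e-12))
        g1 += u1 * scale
        g2 += u2 * scale
    # Gaussian mechanism: noise scale proportional to the clipping norm.
    sigma = noise_mult * clip
    g1 += rng.normal(0, sigma, g1.shape)
    g2 += rng.normal(0, sigma, g2.shape)
    n = len(X)
    W1 -= lr * g1 / n
    W2 -= lr * g2 / n
```

Because DFA never transports error through the forward weights, each layer's per-example update depends only on the input, the local activation, and a random projection of the output error, which is the property the paper argues makes the training signal more amenable to clipping and noising than backpropagated gradients.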
Keywords
differentially private deep learning, deep learning, direct feedback alignment