Deep Leakage from Gradients

Lecture Notes in Computer Science (2020)

Abstract
Exchanging model updates is standard practice in modern federated learning systems. For a long time, gradients were believed to be safe to share, i.e., less informative than the training data themselves. However, gradients do carry hidden information; it is even possible to reconstruct the private training data from the publicly shared gradients. This chapter discusses techniques that reveal the information hidden in gradients and validates their effectiveness on common deep learning tasks, aiming to raise awareness and prompt a rethinking of gradient safety. Several possible defense strategies against such privacy leakage are also discussed.
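To make the leakage concrete, here is a minimal toy sketch (not the chapter's reconstruction algorithm) showing that for a single sample passing through a linear layer with a bias term, the shared gradient alone reveals the private input exactly; all variable names below are illustrative assumptions.

```python
import numpy as np

# Toy illustration of gradient leakage (hypothetical setup, not the
# chapter's method). Model: prediction = w @ x + b,
# loss L = 0.5 * (prediction - y)**2.
# Gradients: dL/dw = (prediction - y) * x,  dL/db = (prediction - y),
# so an observer of the gradients recovers x = (dL/dw) / (dL/db).

rng = np.random.default_rng(0)

# Private training sample (known only to the client).
x_private = rng.normal(size=4)
y_private = 1.0

# Model parameters (shared between server and clients).
w = rng.normal(size=4)
b = 0.0

# Client computes and shares the gradients of the loss.
residual = w @ x_private + b - y_private
grad_w = residual * x_private
grad_b = residual

# Server-side observer reconstructs the private input from the gradients.
x_reconstructed = grad_w / grad_b

print(np.allclose(x_reconstructed, x_private))  # True
```

For deep networks the recovery is no longer a closed-form division, which is why gradient-inversion attacks instead optimize a dummy input until its gradients match the shared ones; this toy case only shows that gradients are not inherently uninformative.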