Defending Against Model Inversion Attack by Adversarial Examples

2021 IEEE International Conference on Cyber Security and Resilience (CSR), 2021

Abstract
Model inversion (MI) attacks aim to infer and reconstruct the input data from the output of a neural network, which poses a severe threat to the privacy of input data. Inspired by adversarial examples, we propose defending against MI attacks by adding adversarial noise to the output. The critical challenge is finding a noise vector that maximizes the inversion error and introduces negligible utility...
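The abstract describes an optimization problem: search for an output-space noise vector that degrades the attacker's inversion network's reconstruction while barely changing the model's prediction. Below is a minimal PyTorch sketch of that idea under our own assumptions, not the paper's actual method: `inversion_net` stands in for a surrogate of the attacker's inversion model, and the L-infinity budget `eps`, step count, and label-preservation heuristic are all illustrative.

```python
import torch
import torch.nn.functional as F

def craft_defensive_noise(output, x, inversion_net, steps=50, lr=0.1, eps=0.3):
    """Find a bounded noise vector for a single output vector `output`
    (shape (1, C)) that maximizes the surrogate inversion error on the
    true input `x`, while trying to keep the top-1 prediction intact.
    Hypothetical sketch; hyperparameters are assumptions."""
    delta = torch.zeros_like(output, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        recon = inversion_net(output + delta)   # attacker's reconstruction
        inv_err = F.mse_loss(recon, x)          # how badly the inversion fails
        (-inv_err).backward()                   # gradient ascent on inversion error
        opt.step()
        opt.zero_grad()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # cap distortion of the output
            # heuristic utility constraint: back off if the label would flip
            if (output + delta).argmax(dim=-1) != output.argmax(dim=-1):
                delta.mul_(0.5)
    return delta.detach()
```

In this framing, the defender would release `output + delta` to the querying party instead of the raw `output`, trading a small, bounded output distortion for a large increase in the attacker's reconstruction error.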
Keywords
Training, Adaptation models, Data privacy, Computational modeling, Neural networks, Turning, Distortion