Sensitivity of Adversarial Perturbation in Fast Gradient Sign Method

Yujie Liu, Shuai Mao, Xiang Mei, Tao Yang, Xuran Zhao

2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019

Abstract
The Fast Gradient Sign Method (FGSM) is a well-known method for adversarial sample attacks. New adversarial samples can be generated by adding a small perturbation to input images; however, the magnitude of this perturbation usually must be selected by the user. This paper focuses on FGSM attacks in the face recognition scenario and empirically evaluates multiple factors affecting the adversarial perturbation in terms of recognition performance. The results demonstrate that the adversarial perturbation is sensitive to many factors, such as the size of the perturbation, the number of iterations, and the granularity of the perturbation.
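The FGSM step the abstract describes (adding a user-chosen small perturbation in the direction of the loss gradient's sign) can be sketched as follows. This is a minimal illustration on a toy logistic-regression "model"; the weights, inputs, and function names here are hypothetical and not taken from the paper, which works with face recognition networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Return x + epsilon * sign(grad_x loss).

    Uses binary cross-entropy loss for a logistic-regression model
    p = sigmoid(w @ x + b); for that loss, d(loss)/dx = (p - y) * w.
    epsilon is the user-selected perturbation size the paper studies.
    """
    p = sigmoid(w @ x + b)        # model's predicted probability
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Hypothetical toy example: a 2-dimensional "image" and fixed weights.
w = np.array([0.5, -0.3])
b = 0.1
x = np.array([1.0, 2.0])
y = 1.0
x_adv = fgsm_perturb(x, y, w, b, epsilon=0.1)  # → array([0.9, 2.1])
```

Because only the sign of the gradient is used, each input dimension is shifted by exactly epsilon, which is why the attack's strength is controlled directly by the perturbation size the paper evaluates.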
Keywords
adversarial perturbation, impersonation attacks, dodging attacks, face recognition