Imperceptible adversarial attack via spectral sensitivity of human visual system

Multimedia Tools and Applications (2023)

Abstract
Adversarial attacks reveal that deep neural networks are vulnerable to adversarial examples. Intuitively, adversarial examples with larger perturbations yield a stronger attack and thus a lower recognition accuracy. However, increasing the perturbation also causes visually noticeable changes in the image. To address the problem of improving attack strength while maintaining visual perception quality, an imperceptible adversarial attack based on the spectral sensitivity of the human visual system is proposed. Guided by an analysis of the human visual system, the proposed method allows more perturbation as attack information and redistributes it into pixels where the changes are imperceptible to human eyes. It therefore achieves better Accuracy under Attack (AuA) than existing attack methods while maintaining image quality at a level similar to theirs. Experimental results demonstrate that our method improves the attack strength of existing adversarial attack methods by 3% to 23% while mostly keeping the change in SSIM below 0.05.
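The core idea described in the abstract, redistributing a perturbation budget toward pixels where changes are hard to see, can be illustrated with a minimal sketch. This is not the authors' actual algorithm: as a stand-in for a spectral-sensitivity model, it weights each pixel by its local gradient magnitude, on the assumption that textured regions mask changes better than flat ones. The function name and parameters are illustrative only.

```python
import numpy as np

def redistribute_perturbation(delta, image, eps=8 / 255):
    """Scale perturbation `delta` by a texture-based visibility mask.

    delta, image: float arrays in [0, 1] with the same shape (H, W).
    eps: per-pixel perturbation budget (L-infinity style).
    """
    # Local activity: magnitude of horizontal + vertical pixel differences.
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    activity = gx + gy
    # Normalize to [0, 1]; busy (textured) regions get weight near 1.
    mask = activity / (activity.max() + 1e-8)
    # Allow larger perturbations where changes are harder to see,
    # then clip back into the overall budget.
    shaped = delta * (0.5 + 0.5 * mask)
    return np.clip(shaped, -eps, eps)

# Usage on random data standing in for an image and an attack perturbation.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
delta = rng.uniform(-8 / 255, 8 / 255, size=(32, 32))
shaped = redistribute_perturbation(delta, img)
print(shaped.shape, float(np.abs(shaped).max()) <= 8 / 255)
```

In the paper's setting, the visibility mask would instead come from the human visual system's spectral sensitivity, but the shaping-and-clipping structure is the same: flat regions receive attenuated perturbation, textured regions keep more of it.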
Keywords
Imperceptible adversarial attack, Spectral sensitivity, Human visual system, Deep learning