One evolutionary algorithm deceives humans and ten convolutional neural networks trained on ImageNet at image recognition

Applied Soft Computing (2023)

Abstract
Convolutional neural networks (CNNs) are widely used in computer vision, but can be deceived by carefully crafted adversarial images. In this paper, we propose an evolutionary algorithm (EA) based adversarial attack against CNNs trained on ImageNet. Our EA-based attack aims to generate adversarial images that not only achieve a high confidence probability of being classified into the target category (at least 75%), but also appear indistinguishable to the human eye, all in a black-box setting. These constraints are imposed to simulate a realistic adversarial attack scenario. Our attack has been thoroughly evaluated on 10 CNNs in various attack scenarios, including high-confidence targeted, good-enough targeted, and untargeted. Furthermore, we have compared our attack against other well-known white-box and black-box attacks. The experimental results reveal that the proposed EA-based attack is superior to, or on par with, its competitors in terms of both the success rate and the visual quality of the adversarial images produced.
Keywords
Adversarial attacks, Black-box attacks, Convolutional neural network, Evolutionary algorithm, Image classification
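The abstract describes a black-box evolutionary attack: candidate perturbations are evolved using only the model's output probabilities, stopping once the target-class confidence reaches a threshold (75%) while the perturbation stays visually small. The paper's exact algorithm is not given here; the following is a minimal illustrative sketch of a (1+λ)-style evolution strategy against a toy stand-in model. All names (`toy_model`, `ea_attack`) and parameter values are hypothetical, not the paper's.

```python
import numpy as np

def toy_model(image):
    # Hypothetical stand-in for a black-box CNN: returns softmax
    # probabilities over 3 classes, driven only by the mean pixel value.
    m = float(image.mean())
    logits = np.array([1.0 - m, m, 0.5])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def ea_attack(model, image, target, conf=0.75, pop=20, sigma=0.05,
              eps=0.1, iters=200, seed=0):
    """(1+lambda) evolution strategy: mutate the perturbation, keep the
    best child if it raises the target-class probability (black-box:
    only model outputs are queried, no gradients)."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(image)
    best_p = model(np.clip(image + delta, 0.0, 1.0))[target]
    for _ in range(iters):
        # Generate a population of mutated perturbations.
        children = delta + sigma * rng.standard_normal((pop,) + image.shape)
        children = np.clip(children, -eps, eps)  # keep perturbation imperceptible
        probs = [model(np.clip(image + c, 0.0, 1.0))[target] for c in children]
        i = int(np.argmax(probs))
        if probs[i] > best_p:                    # greedy (elitist) selection
            delta, best_p = children[i], probs[i]
        if best_p >= conf:                       # high-confidence stopping rule
            break
    return np.clip(image + delta, 0.0, 1.0), best_p
```

With the toy model and a tight `eps`, the 75% threshold may be unreachable; the sketch then simply returns the best perturbation found, which mirrors the "good-enough targeted" scenario mentioned in the abstract.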