Exploring Transferability on Adversarial Attacks

IEEE Access (2023)

Abstract
Despite the progress that has been made in the field, the problem of adversarial attacks remains unresolved. The most up-to-date models are still vulnerable, and there is no simple way to defend against such attacks; even transformers are affected, although they have not yet been studied extensively in this respect. In this paper, we study transferability, a property of adversarial attacks whereby images generated for one architecture can be transferred to another and still be effective. In real-world scenarios such as self-driving cars, malware detection, and face-recognition authentication systems, transferability can lead to security issues. To conduct a behavioral analysis, we select a diverse set of networks and measure how effectively the images produced by various attacks transfer among them. We generate adversarial samples for each network and then evaluate them on the other networks to determine the corresponding transferability performance. We observe that all networks are susceptible to transferred attacks, albeit in some cases at the expense of severely distorted images.
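The cross-network evaluation the abstract describes reduces to a simple loop: craft adversarial samples on one source network, then measure the misclassification rate they induce on every other network. The sketch below is a minimal PyTorch illustration of that protocol, assuming FGSM as a stand-in attack (the paper's keywords list GeoDA, HopSkipJump, and SurFree instead); the model choices and the `loader` variable are hypothetical.

```python
# Minimal sketch of a cross-network transferability matrix, assuming FGSM
# as a stand-in attack; the paper evaluates GeoDA, HopSkipJump, and SurFree.
# The architectures and the data `loader` below are illustrative choices.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm(model, x, y, eps=8 / 255):
    """Craft FGSM adversarial examples against a source model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# A diverse set of networks, including a transformer, as in the paper.
nets = {
    "resnet50": models.resnet50(weights="IMAGENET1K_V1").eval(),
    "vgg16": models.vgg16(weights="IMAGENET1K_V1").eval(),
    "vit_b_16": models.vit_b_16(weights="IMAGENET1K_V1").eval(),
}

def transfer_rate(src, dst, loader):
    """Fraction of samples crafted on `src` that fool `dst`."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm(nets[src], x, y)
        with torch.no_grad():
            pred = nets[dst](x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total

# Evaluate every source/target pair (requires a real `loader`):
# for s in nets:
#     for t in nets:
#         print(f"{s} -> {t}: {transfer_rate(s, t, loader):.3f}")
```

The diagonal of the resulting matrix gives each attack's white-box success rate, while the off-diagonal entries quantify transferability between architectures.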
Keywords
Glass box, Residual neural networks, Closed box, Subspace constraints, Convolutional neural networks, Iterative methods, Adversarial machine learning, Deep learning, Information management, Adversarial attacks, convolutional neural networks, deep learning, GeoDA, HopSkipJump, SurFree, transferability