Understanding and improving adversarial transferability of vision transformers and convolutional neural networks

Information Sciences (2023)

Abstract
Convolutional neural networks (CNNs) and vision transformers (ViTs) are both known to be vulnerable to adversarial examples. Recent work has demonstrated that adversarial examples transfer between the two, but the reported transfer rates are generally mediocre. To enhance the transferability of adversarial examples between CNNs and ViTs, we propose a novel attack built on the observation that CNNs and ViTs differ significantly in their inductive biases: the attack targets the inductive biases shared by the two model classes while suppressing those unique to ViTs. We evaluate the effectiveness of our approach through extensive experiments on state-of-the-art ViTs, CNNs, and robustly trained CNNs, and demonstrate significant improvements in transferability, both among ViTs and from ViTs to CNNs. The code for our project is available at https://github.com/chenxiaoyupetter/inductive-biase-attack.
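To make the notion of "transferability" in the abstract concrete, the following is a minimal illustrative sketch (not the paper's attack): an FGSM-style perturbation is crafted against one simple surrogate model and then evaluated on a second, independently perturbed model. The two linear classifiers with correlated weights are a hypothetical stand-in for a CNN and a ViT that share part of their inductive bias.

```python
# Illustrative sketch of adversarial transferability (not the paper's method).
# We attack a surrogate linear model with an FGSM-style step and check whether
# the perturbation also lowers the score of a different target model.
import numpy as np

rng = np.random.default_rng(0)

# Two linear "models" sharing a common weight component, plus model-specific
# noise -- a toy analogue of two architectures with overlapping inductive bias.
w_shared = rng.normal(size=16)
w_src = w_shared + 0.1 * rng.normal(size=16)   # surrogate (white-box) model
w_tgt = w_shared + 0.1 * rng.normal(size=16)   # target (black-box) model

x = rng.normal(size=16)
eps = 0.5

# FGSM on the surrogate: for a linear score w_src @ x, the input gradient is
# w_src itself, so we step against its sign.
x_adv = x - eps * np.sign(w_src)

drop_src = w_src @ x - w_src @ x_adv   # score drop on the surrogate
drop_tgt = w_tgt @ x - w_tgt @ x_adv   # score drop transferred to the target

print(drop_src > 0, drop_tgt > 0)
```

Because the perturbation aligns with the shared weight component, it usually degrades the target model too, even though the target was never queried; attacking shared structure while avoiding model-specific structure is, at a high level, the intuition the abstract describes.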
Keywords
Vision transformer, Convolutional neural network, Adversarial example, Transferability