Attacking Transformers with Feature Diversity Adversarial Perturbation

Chenxing Gao, Hang Zhou, Junqing Yu, YuTeng Ye, Jiale Cai, Junle Wang, Wei Yang

AAAI 2024 (2024)

Abstract
Understanding the mechanisms behind Vision Transformer (ViT), particularly its vulnerability to adversarial perturbations, is crucial for addressing challenges in its real-world applications. Existing ViT adversarial attackers rely on labels to calculate the gradient for perturbation, and exhibit low transferability to other structures and tasks. In this paper, we present a label-free white-box attack approach for ViT-based models that exhibits strong transferability to various black-box models, including most ViT variants, CNNs, and MLPs, even for models developed for other modalities. Our inspiration comes from the feature collapse phenomenon in ViTs, where the critical attention mechanism overly depends on the low-frequency component of features, causing the features in middle-to-end layers to become increasingly similar and eventually collapse. We propose the feature diversity attacker to naturally accelerate this process and achieve remarkable performance and transferability.
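The abstract describes a label-free attack that accelerates feature collapse by driving down the diversity of intermediate ViT token features. The paper's actual loss and optimization details are not given here, so the following is only a hypothetical sketch: a toy linear feature extractor stands in for a ViT layer, feature diversity is measured as the mean squared distance of tokens from their mean, and a PGD-style loop with an L-infinity budget reduces that diversity via signed gradient steps. All names (`feature_diversity`, `attack`, `extractor`) and hyperparameters are illustrative assumptions, not the authors' method.

```python
import torch

def feature_diversity(feats):
    # feats: (tokens, dim). Diversity proxy: mean squared distance of each
    # token feature from the mean token feature (zero when features collapse).
    return (feats - feats.mean(dim=0, keepdim=True)).pow(2).sum(dim=1).mean()

def attack(extractor, x, eps=8 / 255, alpha=2 / 255, steps=10):
    # Label-free PGD-style loop: descend on feature diversity to push the
    # model's intermediate features toward collapse, within an L-inf ball.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = feature_diversity(extractor(x + delta))
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # minimize diversity
            delta.clamp_(-eps, eps)             # stay in the budget
        delta.grad.zero_()
    return (x + delta).detach()

# Toy stand-in for a ViT block: a fixed random projection over 4 "tokens".
torch.manual_seed(0)
proj = torch.nn.Linear(16, 8)

def extractor(x):
    return proj(x.view(4, 16))

x = torch.rand(1, 4, 16)
x_adv = attack(extractor, x)
```

In the real setting the diversity loss would be taken over the middle-to-end layers where the abstract says collapse occurs, and no labels are needed at any point, which is what gives the attack its transferability across architectures.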
Keywords
CV: Adversarial Attacks & Robustness, CV: Object Detection & Categorization