Semantically Consistent Visual Representation for Adversarial Robustness

IEEE Trans. Inf. Forensics Secur. (2023)

Abstract
Deep neural networks have been widely used across domains owing to the success of deep learning. However, recent studies have shown that these models are vulnerable to adversarial examples, which lead to inaccurate predictions. In this paper, we examine adversarial robustness through the lens of semantic information, which leads us to a new perspective: adversarial attacks destroy the correlation between visual representations and semantic word vectors, while adversarial training restores it. We further observe that the correlation structure among robust representations of different categories aligns with the correlation structure among the corresponding semantic word vectors. Based on these empirical observations, we incorporate semantic information into model training and propose Semantic Constraint Adversarial Robust Learning (SCARL). First, from an information-theoretic perspective, we maximize the mutual information between visual representations and the corresponding semantic word vectors in the embedding space, bridging the information gap between them, and we derive a differentiable lower bound to optimize this mutual information efficiently. Second, we introduce a novel semantic structure constraint that keeps the structure of the visual representations consistent with that of the semantic word vectors. Finally, we integrate these techniques with adversarial training to learn robust visual representations. Extensive experiments on several datasets (such as CIFAR and TinyImageNet) against various adversarial attacks (such as the PGD attack and AutoAttack) demonstrate the benefit of incorporating semantic information for improving model robustness.
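The abstract names three ingredients (a differentiable mutual-information lower bound, a semantic structure constraint, and adversarial training) without implementation detail. The following PyTorch sketch is one plausible rendering, assuming an InfoNCE-style bound for the mutual-information term and a cosine-similarity Gram-matrix match for the structure constraint; the function names (mi_lower_bound, structure_constraint, scarl_loss) and loss weights are hypothetical, not the authors' code.

```python
# Hypothetical sketch of SCARL-style training losses (illustrative only; the
# paper's exact formulations are not given in the abstract).
import torch.nn.functional as F

def mi_lower_bound(feats, word_vecs, labels, tau=0.1):
    """InfoNCE-style differentiable lower bound on the mutual information
    between visual representations and class word vectors.

    feats:     (B, d) visual representations
    word_vecs: (C, d) one semantic word vector per class
    labels:    (B,)   integer class labels
    """
    logits = F.normalize(feats, dim=1) @ F.normalize(word_vecs, dim=1).T / tau
    # Maximizing the bound is, up to a constant, minimizing this cross-entropy.
    return -F.cross_entropy(logits, labels)

def structure_constraint(feats, word_vecs, labels):
    """Keep the pairwise similarity structure of the representations
    consistent with that of the corresponding word vectors."""
    f = F.normalize(feats, dim=1)
    w = F.normalize(word_vecs[labels], dim=1)
    return F.mse_loss(f @ f.T, w @ w.T)

def scarl_loss(model, x_adv, labels, word_vecs, lam_mi=1.0, lam_str=1.0):
    """Total loss on adversarial examples x_adv (e.g., produced by PGD, as in
    standard adversarial training). Assumes the model returns both features
    and class logits; the weights lam_mi and lam_str are illustrative."""
    feats, logits = model(x_adv)
    ce = F.cross_entropy(logits, labels)
    return (ce
            - lam_mi * mi_lower_bound(feats, word_vecs, labels)
            + lam_str * structure_constraint(feats, word_vecs, labels))
```

Under these assumptions, the structure term compares batch-level similarity matrices, which directly encodes the paper's observation that inter-class correlations of robust representations mirror those of the semantic word vectors.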
Keywords
adversarial robustness, consistent visual representation, visual representation