Hijacking Attacks against Neural Networks by Analyzing Training Data
CoRR (2024)
Abstract
Backdoors and adversarial examples are the two primary threats currently faced by deep neural networks (DNNs). Both attacks attempt to hijack model behavior and force unintended outputs by introducing (small) perturbations to the inputs. Backdoor attacks, despite their high success rates, often rest on a strong assumption, namely that the adversary can interfere with model training, which is not always easy to satisfy in reality. Adversarial example attacks, which place weaker assumptions on the attacker, often demand high computational resources, yet do not always yield satisfactory success rates when attacking mainstream black-box models in the real world. These limitations motivate the following research question: can model hijacking be achieved more simply, with a higher attack success rate, and under more reasonable assumptions? In this paper, we propose CleanSheet, a new model hijacking attack that achieves the high performance of backdoor attacks without requiring the adversary to tamper with the model training process. CleanSheet exploits vulnerabilities in DNNs that stem from the training data. Specifically, our key idea is to treat part of the target model's clean training data as "poisoned data" and to capture the characteristics of these data to which the model is particularly sensitive (typically called robust features) in order to construct "triggers." These triggers can be added to any input example to mislead the target model, similar to a backdoor attack. We validate the effectiveness of CleanSheet through extensive experiments on 5 datasets, 79 normally trained models, 68 pruned models, and 39 defensive models. Results show that CleanSheet
exhibits performance comparable to state-of-the-art backdoor attacks, achieving an average attack success rate (ASR) of 97.5% and 92.4% on CIFAR-100 and GTSRB,
respectively. Furthermore, CleanSheet consistently maintains a high ASR when confronted with various mainstream backdoor defenses.
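
To make the key idea concrete, the following is a minimal sketch (not the authors' released implementation) of how a trigger could be derived from clean data: given a surrogate classifier for the same task, an input-agnostic perturbation is optimized over clean images so that adding it to any input pushes predictions toward a chosen target class, approximating the "robust feature" triggers described above. The surrogate model, data loader, and hyperparameters (epsilon, steps, lr) below are illustrative assumptions.

# Hedged sketch: learn a universal "trigger" from clean training data
# by optimizing a bounded perturbation that steers a surrogate model
# toward a chosen target class. Not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def learn_trigger(surrogate, clean_loader, target_class,
                  image_shape=(3, 32, 32), epsilon=8 / 255,
                  steps=50, lr=0.01, device="cpu"):
    """Optimize an input-agnostic perturbation ("trigger") that, when added
    to arbitrary inputs, pushes the surrogate's prediction to target_class."""
    surrogate = surrogate.to(device).eval()
    trigger = torch.zeros(image_shape, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([trigger], lr=lr)

    for _ in range(steps):
        for images, _ in clean_loader:
            images = images.to(device)
            # Add the candidate trigger to clean images, keep pixels in [0, 1].
            poisoned = torch.clamp(images + trigger, 0.0, 1.0)
            logits = surrogate(poisoned)
            labels = torch.full((images.size(0),), target_class,
                                dtype=torch.long, device=device)
            # Encourage the surrogate to predict the target class.
            loss = F.cross_entropy(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Keep the perturbation small so the trigger stays inconspicuous.
            with torch.no_grad():
                trigger.clamp_(-epsilon, epsilon)
    return trigger.detach()

At attack time, under these assumptions, the learned trigger would simply be added to an arbitrary test input (clamped back to the valid pixel range) before it is submitted to the target model.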