UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks
CoRR (2023)
Abstract
Backdoor attacks are an emerging threat to deep neural networks; they
typically embed malicious behaviors into a victim model by injecting poisoned
samples. Adversaries can activate the injected backdoor during inference by
presenting the trigger on input images. Prior defensive methods have achieved
remarkable success in countering dirty-label backdoor attacks, where poisoned
samples carry incorrect labels. However, these approaches fail against a newer
class of attack, clean-label backdoor attacks, which imperceptibly perturb the
poisoned samples while keeping their labels consistent. More sophisticated
algorithms are needed to defend against such stealthy attacks. In
this paper, we propose UltraClean, a general framework that simplifies the
identification of poisoned samples and defends against both dirty-label and
clean-label backdoor attacks. Given that backdoor triggers introduce
adversarial noise that is amplified during feed-forward propagation, UltraClean
first generates two variants of training samples using off-the-shelf denoising
functions. It then measures the susceptibility of training samples by
leveraging the error amplification effect in DNNs, which magnifies the noise
difference between the original image and its denoised variants. Lastly, it
filters out poisoned samples based on their susceptibility to thwart backdoor
implantation. Despite its simplicity, UltraClean achieves a superior detection
rate across various datasets and significantly reduces the backdoor attack
success rate while maintaining decent model accuracy on clean data,
outperforming existing defensive methods by a large margin. Code is available
at https://github.com/bxz9200/UltraClean.
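
To make the three-step pipeline concrete, below is a minimal Python sketch of the detection idea. It is an illustration under assumptions, not the authors' implementation (see the linked repository for that): the choice of OpenCV's non-local means and median filters as the two off-the-shelf denoisers, the L1 distance between model outputs as the susceptibility score, and the `keep_ratio` cutoff in `filter_dataset` are all hypothetical stand-ins, and `model` is assumed to be a PyTorch classifier trained on the possibly poisoned dataset.

```python
# Illustrative sketch of the UltraClean detection idea; the denoisers,
# susceptibility score, and threshold below are assumptions, not the
# authors' exact design.
import cv2
import numpy as np
import torch


def denoise_variants(img_uint8):
    """Produce two denoised variants of an HxWx3 uint8 image."""
    v1 = cv2.fastNlMeansDenoisingColored(img_uint8)  # non-local means denoising
    v2 = cv2.medianBlur(img_uint8, 3)                # 3x3 median filter
    return v1, v2


def susceptibility(model, img_uint8, device="cpu"):
    """Score one sample: how far the model's output drifts under denoising.

    Per the error amplification effect, the small pixel-level difference
    between a poisoned image and its denoised variants grows through
    feed-forward propagation, so poisoned samples tend to score high.
    """
    v1, v2 = denoise_variants(img_uint8)
    batch = np.stack([img_uint8, v1, v2]).astype(np.float32) / 255.0
    x = torch.from_numpy(batch).permute(0, 3, 1, 2).to(device)  # NCHW layout
    with torch.no_grad():
        out = model(x)  # logits, shape (3, num_classes)
    # L1 distance between the original output and each variant's output.
    return ((out[0] - out[1]).abs().sum() + (out[0] - out[2]).abs().sum()).item()


def filter_dataset(model, images, keep_ratio=0.9):
    """Drop the most susceptible fraction; `keep_ratio` is an assumed knob."""
    scores = np.array([susceptibility(model, img) for img in images])
    cutoff = np.quantile(scores, keep_ratio)
    return [img for img, s in zip(images, scores) if s <= cutoff]
```

Filtering by a quantile of the scores, as sketched here, avoids per-dataset threshold calibration at the cost of always discarding a fixed fraction of the training data; after filtering, the model would be retrained on the retained samples.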