UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models
arXiv (2024)
Abstract
Diffusion models are vulnerable to backdoor attacks, where malicious
attackers inject backdoors by poisoning a portion of the training samples
during the training stage. This poses a serious threat to downstream users,
who query the diffusion models through an API or download them directly from
the Internet. To mitigate the threat of backdoor attacks, there has been a
plethora of investigations into backdoor detection. However, none of them
designed a specialized backdoor detection method for diffusion models,
leaving the area largely under-explored. Moreover, these prior methods mainly
focus on traditional neural networks for classification tasks and cannot be
easily adapted to backdoor detection on generative tasks. Additionally, most
prior methods require white-box access to model weights and architectures, or
probability logits as additional information, which is not always practical.
In this paper, we propose a Unified Framework for Input-level backdoor
Detection (UFID) on diffusion models, which is motivated by observations of
diffusion models and further validated with a theoretical causality analysis.
Extensive experiments across different datasets on both conditional and
unconditional diffusion models show that our method achieves superb
performance in detection effectiveness and run-time efficiency. The code is
available at https://github.com/GuanZihan/official_UFID.