TextGuard: Provable Defense against Backdoor Attacks on Text Classification
CoRR (2023)
Abstract
Backdoor attacks have become a major security threat for deploying machine
learning models in security-critical applications. Existing research endeavors
have proposed many defenses against backdoor attacks. Although these defenses
demonstrate some empirical efficacy, none of them provides a formal, provable
security guarantee against arbitrary attacks. As a result,
they can be easily broken by strong adaptive attacks, as shown in our
evaluation. In this work, we propose TextGuard, the first provable defense
against backdoor attacks on text classification. In particular, TextGuard first
divides the (backdoored) training data into sub-training sets by splitting
each training sentence into sub-sentences. This partitioning ensures
that a majority of the sub-training sets do not contain the backdoor trigger.
Subsequently, a base classifier is trained from each sub-training set, and
their ensemble provides the final prediction. We theoretically prove that,
when the length of the backdoor trigger is below a certain threshold,
TextGuard guarantees that its prediction remains unaffected by the presence of
the trigger in training and testing inputs. In our evaluation, we demonstrate the
effectiveness of TextGuard on three benchmark text classification tasks,
surpassing the certification accuracy of existing certified defenses against
backdoor attacks. Furthermore, we propose additional strategies to enhance the
empirical performance of TextGuard. Comparisons with state-of-the-art empirical
defenses validate the superiority of TextGuard in countering multiple backdoor
attacks. Our code and data are available at
https://github.com/AI-secure/TextGuard.
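
The abstract outlines TextGuard's core mechanism: deterministically partition the words of each sentence into groups, train one base classifier per group, and return the majority vote of the ensemble. Below is a minimal sketch of that idea in Python; the helper names (split_into_groups, ensemble_predict), the per-word MD5 hash, and the classifier-as-callable interface are illustrative assumptions, not the authors' actual API (see the linked repository for the real implementation).

```python
import hashlib
from collections import Counter

def split_into_groups(sentence, num_groups):
    """Hash each word into one of `num_groups` sub-sentences,
    preserving word order within each group. The hash is
    deterministic, so a given word always lands in the same
    group at training and test time."""
    groups = [[] for _ in range(num_groups)]
    for word in sentence.split():
        h = int(hashlib.md5(word.encode("utf-8")).hexdigest(), 16)
        groups[h % num_groups].append(word)
    return [" ".join(g) for g in groups]

def ensemble_predict(classifiers, sentence):
    """Majority vote over base classifiers, where the i-th
    classifier only ever sees the i-th sub-sentence.
    `classifiers` is assumed to be a list of callables that
    map a string to a predicted label."""
    subs = split_into_groups(sentence, len(classifiers))
    votes = Counter(clf(sub) for clf, sub in zip(classifiers, subs))
    return votes.most_common(1)[0][0]
```

The provable guarantee rests on this same partitioning: a trigger of at most k distinct words can land in at most k groups, so at most k base classifiers can be influenced by the trigger during training or at test time; whenever the majority vote's margin is large enough relative to k, the ensemble's prediction is provably unchanged by the trigger.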