Backdoor Attacks on Dense Passage Retrievers for Disseminating Misinformation
CoRR (2024)
Abstract
Dense retrievers and retrieval-augmented language models have been widely
used in various NLP applications. Despite being designed to deliver reliable
and secure outcomes, the vulnerability of retrievers to potential attacks
remains unclear, raising concerns about their security. In this paper, we
introduce a novel scenario where the attackers aim to covertly disseminate
targeted misinformation, such as hate speech or advertisements, through a
retrieval system. To achieve this, we propose a perilous backdoor attack
triggered by grammar errors in dense passage retrieval. Our approach ensures
that attacked models can function normally for standard queries but are
manipulated to return passages specified by the attacker when users
unintentionally make grammatical mistakes in their queries. Extensive
experiments demonstrate the effectiveness and stealthiness of our proposed
attack method. When a user query is error-free, our model consistently
retrieves accurate information while effectively filtering out misinformation
from the top-k results. However, when a query contains grammar errors, our
system shows a significantly higher success rate in fetching the targeted
content.
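The behavior described above can be illustrated with a toy sketch. This is not the paper's implementation: the embeddings, the corpus, and the grammar-error trigger heuristic below are all hypothetical stand-ins, chosen only to show how a backdoored query encoder could leave clean queries unaffected while steering triggered queries toward an attacker-specified passage in a dot-product retriever.

```python
# Toy illustration (not the paper's method): dense retrieval ranks
# passages by the dot product of query and passage embeddings. A
# backdoored query encoder shifts the query embedding when the query
# contains a trigger (here, a crude stand-in for a grammar error), so
# the attacker's passage rises to the top of the ranking.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hand-crafted 3-d embeddings for a tiny corpus (hypothetical values).
PASSAGES = {
    "capital_fact": [1.0, 0.0, 0.0],   # benign, relevant passage
    "weather_fact": [0.0, 1.0, 0.0],   # benign, irrelevant passage
    "attacker_ad":  [0.0, 0.0, 1.0],   # attacker-specified passage
}

def encode_query(query: str) -> list:
    """Backdoored encoder: normal queries embed near the relevant
    passage; queries containing the toy trigger embed near the
    attacker's passage."""
    if " is are " in f" {query} ":     # crude grammar-error heuristic
        return [0.1, 0.0, 0.9]         # backdoor shifts the embedding
    return [0.9, 0.1, 0.0]             # clean behavior preserved

def retrieve(query: str) -> str:
    """Return the top-1 passage for the query."""
    q = encode_query(query)
    return max(PASSAGES, key=lambda p: dot(q, PASSAGES[p]))

print(retrieve("what is the capital of France"))      # clean query
print(retrieve("what is are the capital of France"))  # triggered query
```

The clean query retrieves the benign relevant passage, while the ungrammatical variant of the same query retrieves the attacker's passage, mirroring the dual behavior the abstract describes.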