PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models
CoRR (2024)
Abstract
Large language models (LLMs) have achieved remarkable success due to their
exceptional generative capabilities. Despite their success, they also have
inherent limitations such as a lack of up-to-date knowledge and hallucination.
Retrieval-Augmented Generation (RAG) is a state-of-the-art technique to
mitigate those limitations. In particular, given a question, RAG retrieves
relevant knowledge from a knowledge database to augment the input of the LLM.
For instance, the retrieved knowledge could be a set of top-k texts that are
most semantically similar to the given question when the knowledge database
contains millions of texts collected from Wikipedia. As a result, the LLM could
utilize the retrieved knowledge as the context to generate an answer for the
given question. Existing studies mainly focus on improving the accuracy or
efficiency of RAG, leaving its security largely unexplored. We aim to bridge
the gap in this work. In particular, we propose PoisonedRAG, a set of knowledge
poisoning attacks to RAG, where an attacker could inject a few poisoned texts
into the knowledge database such that the LLM generates an attacker-chosen
target answer for an attacker-chosen target question. We formulate knowledge
poisoning attacks as an optimization problem, whose solution is a set of
poisoned texts. Depending on the attacker's background knowledge of the RAG
(e.g., black-box and white-box settings), we propose two solutions to the
optimization problem, one for each setting. Our results on multiple benchmark
datasets and LLMs show our attacks could achieve 90% attack success rates when
injecting 5 poisoned texts for each target question into a database with
millions of texts. We also evaluate recent defenses and our results show they
are insufficient to defend against our attacks, highlighting the need for new
defenses.