Defending Against Indirect Prompt Injection Attacks With Spotlighting
arxiv(2024)
摘要
Large Language Models (LLMs), while powerful, are built and trained to
process a single text input. In common applications, multiple inputs can be
processed by concatenating them together into a single stream of text. However,
the LLM is unable to distinguish which sections of prompt belong to various
input sources. Indirect prompt injection attacks take advantage of this
vulnerability by embedding adversarial instructions into untrusted data being
processed alongside user commands. Often, the LLM will mistake the adversarial
instructions as user commands to be followed, creating a security vulnerability
in the larger system. We introduce spotlighting, a family of prompt engineering
techniques that can be used to improve LLMs' ability to distinguish among
multiple sources of input. The key insight is to utilize transformations of an
input to provide a reliable and continuous signal of its provenance. We
evaluate spotlighting as a defense against indirect prompt injection attacks,
and find that it is a robust defense that has minimal detrimental impact to
underlying NLP tasks. Using GPT-family models, we find that spotlighting
reduces the attack success rate from greater than 50% to below 2% in our
experiments with minimal impact on task efficacy.
更多查看译文
AI 理解论文
溯源树
样例
生成溯源树,研究论文发展脉络
Chat Paper
正在生成论文摘要