Image Hijacks: Adversarial Images can Control Generative Models at Runtime
arXiv (2023)
Abstract
Are foundation models secure against malicious actors? In this work, we focus
on the image input to a vision-language model (VLM). We discover image hijacks,
adversarial images that control the behaviour of VLMs at inference time, and
introduce the general Behaviour Matching algorithm for training image hijacks.
From this, we derive the Prompt Matching method, allowing us to train hijacks
matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the
Eiffel Tower is now located in Rome') using a generic, off-the-shelf dataset
unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks
for four types of attack, forcing VLMs to generate outputs of the adversary's
choice, leak information from their context window, override their safety
training, and believe false statements. We study these attacks against LLaVA, a
state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types
achieve a success rate of over 80% and require only small image perturbations.
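To make the idea concrete, the following is a minimal sketch of how a Behaviour-Matching-style image hijack could be trained as a bounded adversarial perturbation. It assumes a hypothetical PyTorch-style callable `vlm(image, prompt_ids, target_ids)` that returns the cross-entropy loss of the target text given the image and prompt; this is an illustrative PGD-style loop under those assumptions, not the authors' released implementation.

```python
import torch

def train_image_hijack(vlm, init_image, prompt_ids, target_ids,
                       eps=8 / 255, step_size=1 / 255, steps=500):
    """Optimise an L-infinity-bounded perturbation so the VLM emits target_ids."""
    image = init_image.clone().detach()
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(steps):
        # Hypothetical interface: cross-entropy of the target text given image + prompt.
        loss = vlm(image + delta, prompt_ids, target_ids)
        loss.backward()
        with torch.no_grad():
            # Signed-gradient descent step, then project back into the eps-ball.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # Keep the perturbed image a valid image in [0, 1].
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()

    return (image + delta).detach()
```

The same loop can, in principle, instantiate any of the described attacks by swapping the target: a fixed adversary-chosen string, text that echoes the context window, or outputs matching a chosen prompt, as in Prompt Matching.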