AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting
CoRR (2024)
Abstract
With the advent and widespread deployment of Multimodal Large Language Models
(MLLMs), the imperative to ensure their safety has become increasingly
pronounced. However, with the integration of additional modalities, MLLMs are
exposed to new vulnerabilities, rendering them prone to structure-based
jailbreak attacks, where semantic content (e.g., "harmful text") is
injected into images to mislead MLLMs. In this work, we aim to defend
against such threats. Specifically, we propose Adaptive
Shield Prompting (AdaShield), which prepends inputs with
defense prompts to defend MLLMs against structure-based jailbreak attacks
without fine-tuning MLLMs or training additional modules (e.g., post-stage
content detector). Initially, we present a manually designed static defense
prompt, which thoroughly examines the image and instruction content step by
step and specifies response methods to malicious queries. Furthermore, we
introduce an adaptive auto-refinement framework, consisting of a target MLLM
and an LLM-based defense prompt generator (Defender). These components
collaboratively and iteratively communicate to generate a defense prompt.
Extensive experiments on popular structure-based jailbreak attacks and
benign datasets show that our methods consistently improve MLLMs'
robustness against such attacks without compromising the models' general
capabilities on standard benign tasks. Our code is
available at https://github.com/rain305f/AdaShield.
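As a rough illustration of the adaptive auto-refinement loop described in the abstract, the Python sketch below shows one plausible reading: the Defender LLM rewrites the defense prompt whenever the target MLLM fails to refuse a malicious query, and the loop stops once all attack samples are refused or a round budget is exhausted. All names, the refusal check, and the stopping rule are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch of an adaptive defense-prompt refinement loop
# (hypothetical names; target_mllm and defender_llm are assumed to be
# callables that return text responses).

def judge_is_refusal(response: str) -> bool:
    # Crude placeholder check: treat common refusal phrases as a successful defense.
    refusal_markers = ("i cannot", "i can't", "sorry", "i'm unable")
    return any(marker in response.lower() for marker in refusal_markers)

def adashield_auto_refine(target_mllm, defender_llm, attack_samples,
                          init_defense_prompt, max_rounds=5):
    """Iteratively refine a defense prompt against structure-based attacks."""
    defense_prompt = init_defense_prompt
    for _ in range(max_rounds):
        all_refused = True
        for image, instruction in attack_samples:
            # Prepend the current defense prompt to the (malicious) query.
            response = target_mllm(image, defense_prompt + "\n" + instruction)
            if not judge_is_refusal(response):
                # Defense failed on this sample: ask the Defender LLM to
                # rewrite the prompt given the observed jailbreak response.
                defense_prompt = defender_llm(
                    "Previous defense prompt:\n" + defense_prompt + "\n"
                    "Jailbreak response:\n" + response + "\n"
                    "Rewrite the defense prompt so the model refuses such queries."
                )
                all_refused = False
        if all_refused:
            break
    return defense_prompt
```

This mirrors the abstract's description of the target MLLM and Defender communicating iteratively; in practice the refined prompt would also be checked against benign queries so that general capabilities are preserved.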