AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Adversarial Visual-Instructions
arXiv (2024)
Abstract
Large Vision-Language Models (LVLMs) have shown significant progress in
responding well to visual-instructions from users. However, these instructions,
encompassing images and text, are susceptible to both intentional and
inadvertent attacks. Despite the critical importance of LVLMs' robustness
against such threats, current research in this area remains limited. To bridge
this gap, we introduce AVIBench, a framework designed to analyze the robustness
of LVLMs when facing various adversarial visual-instructions (AVIs), including
four types of image-based AVIs, ten types of text-based AVIs, and nine types of
content bias AVIs (such as gender, violence, cultural, and racial biases, among
others). We generate 260K AVIs encompassing five categories of multimodal
capabilities (nine tasks) and content bias. We then conduct a comprehensive
evaluation involving 14 open-source LVLMs to assess their performance. AVIBench
also serves as a convenient tool for practitioners to evaluate the robustness
of LVLMs against AVIs. Our findings and extensive experimental results shed
light on the vulnerabilities of LVLMs, and highlight that inherent biases exist
even in advanced closed-source LVLMs like GeminiProVision and GPT-4V. This
underscores the importance of enhancing the robustness, security, and fairness
of LVLMs. The source code and benchmark will be made publicly available.