Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study
arXiv (2024)
Abstract
Pre-training image representations on raw text about images enables
zero-shot vision transfer to downstream tasks. Through pre-training on millions
of samples collected from the internet, multimodal foundation models, such as
CLIP, produce state-of-the-art zero-shot results that are often competitive
with fully supervised methods, without any task-specific training. Beyond
encouraging classification accuracy, these models are reported to close the
robustness gap by matching the performance of supervised models trained on
ImageNet under natural distribution shift. Because robustness is critical to
real-world applications, especially safety-critical ones, in this paper we
present a comprehensive evaluation based on a large-scale robustness benchmark
covering 7 natural distribution shifts, 3 synthetic distribution shifts, and 11
adversarial attacks, using CLIP as a pilot study. We show that CLIP suffers a
significant robustness drop compared to supervised ImageNet models on our
benchmark, especially under synthetic distribution shifts and adversarial
attacks. Furthermore, data overlap analysis suggests that the observed
robustness under natural distribution shifts can be attributed, at least in
part, to data overlap. In summary, our evaluation shows that a comprehensive
assessment of robustness is necessary and that the robustness of zero-shot
multimodal models needs significant improvement.
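
For concreteness, below is a minimal sketch of the zero-shot classification protocol the abstract refers to, written against the Hugging Face transformers CLIP API. The checkpoint name, label set, and image path are illustrative placeholders, not the paper's exact setup.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (an assumption; the paper's exact model
# configuration may differ).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Zero-shot classification: build one text prompt per class name and score
# the image against every prompt; no task-specific training takes place.
class_names = ["dog", "cat", "car"]                    # placeholder label set
prompts = [f"a photo of a {c}" for c in class_names]
image = Image.open("example.jpg")                      # placeholder image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image          # image-text similarities

probs = logits.softmax(dim=-1)                         # per-class probabilities
print(class_names[probs.argmax().item()], probs.tolist())
```

The abstract does not list the 11 adversarial attacks, so the sketch below uses a single-step FGSM perturbation purely as a representative example of the kind of adversarial evaluation involved; `model` is assumed to be any differentiable classifier over CLIP features that returns logits, and the budget `eps` is a common convention, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=8 / 255):
    """Single-step FGSM: move each pixel along the sign of the loss
    gradient by budget eps, then clamp back to the valid image range."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)  # model returns logits (assumed)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```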