Anchor-based Robust Finetuning of Vision-Language Models
CVPR 2024
Abstract
We aim to finetune a vision-language model without hurting its
out-of-distribution (OOD) generalization. We address two types of OOD
generalization, i.e., i) domain shift, such as from natural to sketch images, and ii)
zero-shot capability to recognize categories that were not contained in the
finetuning data. Arguably, the diminished OOD generalization after finetuning
stems from the excessively simplified finetuning target, which provides only
the class information, such as “a photo of a [CLASS]”. This is distinct from
the process by which CLIP was pretrained, where there is abundant text
supervision with rich semantic information. Therefore, we propose to compensate
for the finetuning process using auxiliary supervision with rich semantic
information, which acts as anchors to preserve the OOD generalization.
Specifically, two types of anchors are elaborated in our method: i) the
text-compensated anchor, which uses images from the finetuning set but
enriches the text supervision with a pretrained captioner, and ii) the image-text-pair
anchor, which is retrieved, according to the downstream task, from a dataset
similar to the pretraining data of CLIP and is thus associated with original
CLIP-style text with rich semantics. These anchors are utilized as auxiliary semantic information to
maintain the original feature space of CLIP, thereby preserving its OOD
generalization capabilities. Comprehensive experiments demonstrate that our
method achieves in-distribution performance akin to conventional finetuning
while attaining new state-of-the-art results on domain shift and zero-shot
learning benchmarks.
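
The abstract does not spell out the training objective. The following is a minimal sketch, not the authors' implementation, of how anchor supervision with rich text could be combined with the standard class-prompt finetuning loss; the function names, the symmetric InfoNCE formulation, and the `anchor_weight=0.5` value are illustrative assumptions.

```python
# Hedged sketch: standard CLIP-style finetuning loss on class prompts,
# plus auxiliary anchor losses using (i) captioner-enriched text for the
# finetuning images and (ii) retrieved image-text pairs with rich text.
# All weights and names are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over L2-normalized image/text features."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def anchor_finetune_loss(img_feats, class_text_feats,
                         caption_text_feats,
                         anchor_img_feats, anchor_text_feats,
                         anchor_weight=0.5):
    """Finetuning loss on class prompts plus two anchor terms.

    img_feats / class_text_feats: finetuning images and "a photo of a
        [CLASS]" prompt features (the plain finetuning target).
    caption_text_feats: captioner-generated rich text for the same images
        (text-compensated anchor).
    anchor_img_feats / anchor_text_feats: retrieved image-text pairs with
        original rich text (image-text-pair anchor).
    """
    finetune = clip_contrastive_loss(img_feats, class_text_feats)
    text_anchor = clip_contrastive_loss(img_feats, caption_text_feats)
    pair_anchor = clip_contrastive_loss(anchor_img_feats, anchor_text_feats)
    return finetune + anchor_weight * (text_anchor + pair_anchor)
```

In this reading, the anchor terms keep the finetuned encoders aligned with semantically rich text, rather than only with the simplified class prompts, which is the mechanism the abstract credits for preserving OOD generalization.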