Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning
arXiv (2024)
Abstract
Generative vision-language models (VLMs) have shown impressive performance in
zero-shot vision-language tasks like image captioning and visual question
answering. However, improving their zero-shot reasoning typically requires
second-stage instruction tuning, which relies heavily on human-labeled or large
language model-generated annotations, incurring high labeling costs. To tackle
this challenge, we introduce Image-Conditioned Caption Correction (ICCC), a
novel pre-training task designed to enhance VLMs' zero-shot performance without
the need for labeled task-aware data. The ICCC task compels VLMs to rectify
mismatches between visual and language concepts, thereby enhancing instruction
following and text generation conditioned on visual inputs. Leveraging language
structure and a lightweight dependency parser, we construct data samples for the
ICCC task from image-text datasets at low labeling and computational cost.
Experimental results on BLIP-2 and InstructBLIP demonstrate significant
improvements in zero-shot image-text generation-based VL tasks through ICCC
instruction tuning.
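
The data-construction idea described above — corrupting a concept in a caption and training the model to restore it from the image — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the paper uses a lightweight dependency parser to extract concepts, whereas here a toy concept vocabulary (`CONCEPT_VOCAB`) and the helper `make_iccc_sample` are assumptions standing in for that machinery.

```python
import random

# Hypothetical concept vocabulary; in the paper, concepts are
# extracted by a lightweight dependency parser over the corpus.
CONCEPT_VOCAB = {"dog", "cat", "ball", "car", "beach", "park"}

def make_iccc_sample(caption, rng):
    """Build one ICCC-style training pair: swap a concept word in the
    caption to create a mismatched input; the target is the original
    caption, to be restored conditioned on the image."""
    tokens = caption.split()
    # positions whose token is a known concept
    slots = [i for i, t in enumerate(tokens) if t in CONCEPT_VOCAB]
    if not slots:
        return None  # no concept to corrupt
    i = rng.choice(slots)
    original = tokens[i]
    # sample a replacement concept that mismatches the image
    replacement = rng.choice(sorted(CONCEPT_VOCAB - {original}))
    corrupted = tokens[:i] + [replacement] + tokens[i + 1:]
    return {"input": " ".join(corrupted), "target": caption}

rng = random.Random(0)
sample = make_iccc_sample("a dog chasing a ball in the park", rng)
```

Because the pairs are derived mechanically from existing image-text data, no task-specific human or LLM annotation is needed.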