DreamLIP: Language-Image Pre-training with Long Captions
arXiv (2024)
Abstract
Language-image pre-training largely relies on how precisely and thoroughly a
text describes its paired image. In practice, however, the contents of an image
can be so rich that well describing them requires lengthy captions (e.g., with
10 sentences), which are usually missing in existing datasets. Consequently,
there is currently no clear evidence of whether and how language-image
pre-training could benefit from long captions. To figure this out, we first
re-caption 30M images with detailed descriptions using a pre-trained
Multi-modality Large Language Model (MLLM), and then study the usage of the
resulting captions under a contrastive learning framework. We observe that
each sentence within a long caption is very likely to describe the image only
partially (e.g., a single object). Motivated by this, we propose to dynamically
sample sub-captions from the text label to construct multiple positive pairs,
and introduce a grouping loss to match the embeddings of each sub-caption with
its corresponding local image patches in a self-supervised manner. Experimental
results on a wide range of downstream tasks demonstrate the consistent
superiority of our method, termed DreamLIP, over previous alternatives,
highlighting its fine-grained representational capacity. Notably, on the tasks
of image-text retrieval and semantic segmentation, our model trained with 30M
image-text pairs achieves performance on par with or even better than that of
CLIP trained with 400M pairs. The project page is available at
https://zyf0619sjtu.github.io/dream-lip.
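The core idea of the abstract — splitting a long caption into sentences, dynamically sampling a few as positives, and training with a multi-positive contrastive objective — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the sentence splitter, and the cosine-similarity InfoNCE form are all assumptions, and the grouping loss over local image patches is omitted.

```python
import random
import numpy as np

def sample_subcaptions(long_caption, k=3, seed=None):
    """Hypothetical helper: split a long caption into sentences and
    randomly sample k of them as positive sub-captions for one image.
    DreamLIP re-samples sub-captions dynamically during training."""
    rng = random.Random(seed)
    sentences = [s.strip() for s in long_caption.split(".") if s.strip()]
    k = min(k, len(sentences))
    return rng.sample(sentences, k)

def multi_positive_contrastive_loss(img_emb, pos_text_embs, all_text_embs,
                                    tau=0.07):
    """InfoNCE-style loss averaged over multiple positive sub-caption
    embeddings for one image; all_text_embs contains positives plus
    negatives (e.g., sub-captions of other images in the batch)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # shared softmax denominator over every candidate text embedding
    logits = np.array([cos(img_emb, t) / tau for t in all_text_embs])
    log_denom = np.log(np.exp(logits).sum())

    # average the negative log-likelihood over all positive pairs
    losses = [log_denom - cos(img_emb, t) / tau for t in pos_text_embs]
    return float(np.mean(losses))
```

Each positive pair contributes its own cross-entropy term against the shared set of negatives, so an image is pulled toward several sub-caption embeddings at once rather than a single global caption.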