Unsupervised Vision-and-Language Pretraining via Retrieval-based Multi-Granular Alignment

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Vision-and-Language (V+L) pre-training models have achieved tremendous success in recent years on various multi-modal benchmarks. However, most existing models require pre-training on large sets of parallel image-text data, which are costly to collect compared to image-only or text-only data. In this paper, we explore unsupervised Vision-and-Language pre-training (UVLP) to learn cross-modal representations from non-parallel image and text datasets. We identify two key factors that lead to good unsupervised V+L pre-training without parallel data: (i) joint image-and-text input and (ii) overall image-text alignment (even for non-parallel data). Accordingly, we propose a novel unsupervised V+L pre-training curriculum for non-parallel texts and images. We first construct a weakly aligned image-text corpus via a retrieval-based approach, then apply a set of multi-granular alignment pre-training tasks, including region-to-tag, region-to-phrase, and image-to-sentence alignment, to bridge the gap between the two modalities. A comprehensive ablation study shows that each granularity helps learn a stronger pre-trained model. We adapt our pre-trained model to a set of V+L downstream tasks, including VQA, NLVR2, Visual Entailment, and RefCOCO+. Our model achieves state-of-the-art performance on all these tasks under the unsupervised setting.
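To make the retrieval-based weak-alignment step concrete, the sketch below pairs each unlabeled image with the sentence from a non-parallel text corpus that best overlaps with the image's detected object tags. This is only a minimal illustration: the tag-overlap scoring, the `retrieve_weakly_aligned_pairs` helper, and the toy data are assumptions for exposition, not the paper's exact retrieval model.

```python
# Minimal sketch of retrieval-based weak alignment: for each image, retrieve
# the sentence whose words overlap most with the image's object tags.
# The tag-overlap score is an illustrative stand-in for the paper's retriever.
from typing import Dict, List, Tuple


def retrieve_weakly_aligned_pairs(
    image_tags: Dict[str, List[str]],  # image_id -> object tags from a detector
    sentences: List[str],              # non-parallel text corpus
) -> List[Tuple[str, str]]:
    """Pair each image with the sentence sharing the most object tags."""
    tokenized = [set(s.lower().rstrip(".").split()) for s in sentences]
    pairs = []
    for image_id, tags in image_tags.items():
        tag_set = {t.lower() for t in tags}
        # Score each candidate sentence by the number of shared tags.
        best_idx = max(range(len(sentences)),
                       key=lambda i: len(tag_set & tokenized[i]))
        pairs.append((image_id, sentences[best_idx]))
    return pairs


if __name__ == "__main__":
    # Hypothetical toy data to show the weakly aligned pairing.
    images = {"img_001": ["dog", "frisbee", "grass"],
              "img_002": ["man", "bicycle", "street"]}
    corpus = ["A dog catches a frisbee on the grass.",
              "A man rides a bicycle down the street."]
    print(retrieve_weakly_aligned_pairs(images, corpus))
```

The resulting weakly aligned pairs would then feed the multi-granular alignment objectives (region-to-tag, region-to-phrase, image-to-sentence) described in the abstract.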
Keywords
Vision + language, Self- & semi- & meta- & unsupervised learning