Multimodal Pretraining from Monolingual to Multilingual

Mach. Intell. Res. (2023)

Abstract
Multimodal pretraining has achieved convincing results on various downstream tasks in recent years. However, since the majority of existing works build models on English data, their applications are limited by language. In this work, we address this issue by developing models with both multimodal and multilingual capabilities. We explore two types of methods for extending multimodal pretraining models from monolingual to multilingual. Specifically, we propose a pretraining-based model named multilingual multimodal pretraining (MLMM), and two generalization-based models named multilingual CLIP (M-CLIP) and multilingual acquisition (MLA). In addition, we further extend the generalization-based models to incorporate the audio modality and develop the multilingual CLIP for vision, language, and audio (CLIP4VLA). Our models achieve state-of-the-art performance on multilingual vision-text retrieval, visual question answering, and image captioning benchmarks. Based on the experimental results, we discuss the pros and cons of the two types of models and their potential practical applications.
Keywords
Multilingual pretraining, multimodal pretraining, cross-lingual transfer, multilingual generation, cross-modal retrieval