ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation

Bin Shu, Yang Han, Weichong Yin, Shuohuan Wang, Yu Sun, Hanqin Tian, Hua Wang, Haifeng Wang

arXiv (Cornell University), 2022

Abstract
Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance. However, these models focus only on understanding tasks and rely on encoder-only architectures. In this paper, we propose ERNIE-UniX2, a unified cross-lingual cross-modal pre-training framework for both generation and understanding tasks. ERNIE-UniX2 integrates multiple pre-training paradigms (e.g., contrastive learning and language modeling) on top of an encoder-decoder architecture and aims to learn a better joint representation across languages and modalities. Furthermore, ERNIE-UniX2 can be seamlessly fine-tuned for a variety of downstream generation and understanding tasks. Pre-trained on both multilingual text-only and image-text datasets, ERNIE-UniX2 achieves SOTA results on various cross-lingual cross-modal generation and understanding tasks such as multimodal machine translation and multilingual visual question answering.
Keywords
generation, unified, ernie-unix, cross-lingual, cross-modal
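
As a rough illustration of the multi-task setup the abstract describes, the following PyTorch sketch combines a language-modeling loss with an InfoNCE-style contrastive loss on top of a shared encoder-decoder. This is a minimal sketch under stated assumptions, not the authors' implementation: all module names, dimensions, the projection head, and the 0.5 loss weight are illustrative, and inputs are represented as token ids only (the paper additionally handles image inputs).

# Minimal sketch (not the authors' code) of jointly training a contrastive objective
# and a language-modeling objective on a shared encoder-decoder. Module names,
# dimensions, and loss weights are illustrative assumptions, not paper details.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedEncoderDecoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        # Projection head for contrastive alignment of pooled encoder states.
        self.proj = nn.Linear(d_model, 256)
        # LM head for generation targets (e.g., translations, captions).
        self.lm_head = nn.Linear(d_model, vocab_size)

    def encode(self, src_ids):
        return self.transformer.encoder(self.token_emb(src_ids))

    def forward(self, src_ids, tgt_ids):
        memory = self.encode(src_ids)
        dec = self.transformer.decoder(self.token_emb(tgt_ids), memory)
        return memory, self.lm_head(dec)


def contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of pooled embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


model = UnifiedEncoderDecoder()
src = torch.randint(0, 32000, (4, 16))   # source-side ids (text here; images in the paper)
tgt = torch.randint(0, 32000, (4, 16))   # target-language ids for the LM objective
pair = torch.randint(0, 32000, (4, 16))  # paired view for the contrastive objective

memory, lm_logits = model(src, tgt[:, :-1])
lm_loss = F.cross_entropy(lm_logits.reshape(-1, 32000), tgt[:, 1:].reshape(-1))

pooled_a = model.proj(memory.mean(dim=1))
pooled_b = model.proj(model.encode(pair).mean(dim=1))
loss = lm_loss + 0.5 * contrastive_loss(pooled_a, pooled_b)  # assumed multi-task weighting
loss.backward()

Because both objectives share the same encoder-decoder parameters, understanding-style tasks (contrastive alignment) and generation-style tasks (language modeling) can be optimized jointly; how the actual paper weights and schedules these objectives is not specified here.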