Contrastive Learning Based Visual Representation Enhancement for Multimodal Machine Translation

IEEE International Joint Conference on Neural Networks (IJCNN), 2022

Abstract
Multimodal machine translation (MMT) is a task that incorporates an extra image modality alongside text for translation. Previous works have studied the interaction between the two modalities and investigated whether the visual modality is actually needed. However, few works focus on supplying models with better and more effective visual representations as input. We argue that the performance of MMT systems improves when better visual representations are fed into them. To investigate this idea, we introduce mT-ICL, a multimodal Transformer model with image contrastive learning. The contrastive objective is optimized to enhance the representation ability of the image encoder so that it can produce better and more adaptive visual representations. Experiments show that mT-ICL significantly outperforms a strong baseline and achieves new state-of-the-art results on most test sets of English-to-German and English-to-French. Further analysis reveals that the visual modality serves as more than a regularization method under the contrastive learning framework.
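The abstract does not spell out the exact form of the contrastive objective. As a rough illustration only, a common choice for enhancing an image encoder is an InfoNCE-style loss over paired views of the same image, sketched below; the function name, two-view setup, and temperature value are assumptions, not the paper's implementation.

```python
# A minimal sketch of an InfoNCE-style contrastive objective for an image
# encoder, assuming the common setup of contrasting two augmented views of
# each image in a batch. The loss actually used by mT-ICL may differ; all
# names and hyper-parameters here are illustrative.
import torch
import torch.nn.functional as F

def image_contrastive_loss(view_a: torch.Tensor,
                           view_b: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """view_a, view_b: (batch, dim) image-encoder outputs for two views."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    # Similarity of every view-a embedding with every view-b embedding.
    logits = a @ b.t() / temperature            # (batch, batch)
    targets = torch.arange(a.size(0), device=a.device)
    # Matching pairs (the diagonal) are positives; all other pairs are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Minimizing such a loss pulls representations of the same image together and pushes different images apart, which is one standard way to make an image encoder produce more discriminative features.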
Keywords
Multimodal, Machine Translation, Contrastive Learning