Research Progress on Vision-Language Multimodal Pretraining Model Technology

Huansha Wang, Ruiyang Huang, Jianpeng Zhang

ELECTRONICS (2022)

Abstract
Because pretraining models are not limited by the scale of annotated data and can learn general semantic information, they perform well in natural language processing and computer vision tasks. In recent years, research on multimodal pretraining models has attracted increasing attention, and many vision-language multimodal datasets and related models have been proposed. To better summarize and analyze the current state and future trends of vision-language multimodal pretraining technology, this paper first comprehensively reviews the category system and related tasks of vision-language multimodal pretraining. Second, research progress on vision-language multimodal pretraining is summarized and analyzed along two dimensions: image-language models and video-language models. Finally, open problems and development trends in vision-language multimodal pretraining are discussed.
Keywords
vision-language pretraining model, multimodal pretraining model, pretraining techniques, unsupervised learning