Towards Multimodal Vision-Language Models Generating Non-generic Text

AAAI Conference on Artificial Intelligence (2022)

Abstract
While text generated by current vision-language models may be accurate and syntactically correct, it is often overly general. Recent work has used optical character recognition to supplement visual information with text extracted from an image. In many cases, using text in the image improves the specificity and usefulness of generated text. We contend that vision-language models can benefit from additional information extracted from an image. We modify previous multimodal frameworks to accept relevant information from a number of auxiliary classifiers. In particular, we focus on person names as an additional set of tokens and create a novel image-caption dataset to facilitate captioning with person names. The dataset, Politicians and Athletes in Captions (PAC), consists of captioned images of well-known people in context. By fine-tuning pretrained models on this dataset, we demonstrate a model that, despite training on limited data, naturally integrates facial recognition tokens into generated text.
Keywords
Vision-Language, Multimodal, Vision, Image Captioning