Text Data-Centric Image Captioning with Interactive Prompts
CoRR (2024)
Abstract
Supervised image captioning approaches have made great progress, but it is
challenging to collect high-quality human-annotated image-text data. Recently,
large-scale vision-and-language models (e.g., CLIP) and large-scale generative
language models (e.g., GPT-2) have shown strong performance on various tasks,
opening up new solutions for image captioning with web-paired data, unpaired
data, or even text-only data. Among these, the mainstream solution is to
project image embeddings into the text embedding space, leveraging the
consistent image-text representations learned by CLIP. However, current
methods still face several challenges: adapting to diverse data configurations
within a unified solution, accurately estimating the image-text embedding
bias, and correcting unsatisfactory predictions at inference time. This paper
proposes a new Text data-centric approach with Interactive Prompts for image
Captioning, named TIPCap. 1) We
approach with Interactive Prompts for image Captioning, named TIPCap. 1) We
consider four different settings which gradually reduce the dependence on
paired data. 2) We construct a mapping module driven by a multivariate
Gaussian distribution to mitigate the modality gap (see the sketch below); it
is applicable to all four settings. 3) We propose a prompt interaction module that can
incorporate optional prompt information before generating captions. Extensive
experiments show that our TIPCap outperforms other weakly supervised or
unsupervised image captioning methods and achieves new state-of-the-art
performance on two widely used datasets, MS-COCO and Flickr30K.
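As an intuition aid, below is a minimal, hypothetical sketch of what a Gaussian-driven mapping between CLIP text and image embeddings could look like. It is not the paper's actual implementation: the class name GaussianMapping, the diagonal-covariance parameterization, and the 512-dimensional embeddings are all assumptions for illustration.

import torch
import torch.nn.functional as F

class GaussianMapping(torch.nn.Module):
    """Hypothetical sketch: model the CLIP image-text embedding bias as a
    multivariate Gaussian and use it to turn text embeddings into pseudo
    image embeddings. Not the paper's actual implementation."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Mean and log-variance of the assumed (diagonal) Gaussian bias;
        # in practice these could be estimated from whatever paired data
        # each of the four settings provides.
        self.mu = torch.nn.Parameter(torch.zeros(dim))
        self.log_var = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Sample a bias from N(mu, diag(exp(log_var))) via the
        # reparameterization trick and add it to the text embedding.
        eps = torch.randn_like(text_emb)
        bias = self.mu + eps * torch.exp(0.5 * self.log_var)
        # Re-normalize, since CLIP embeddings lie on the unit sphere.
        return F.normalize(text_emb + bias, dim=-1)

# Toy usage: map a batch of stand-in CLIP text features so that a
# caption decoder could be trained from text-only data.
mapper = GaussianMapping(dim=512)
text_emb = F.normalize(torch.randn(4, 512), dim=-1)
pseudo_image_emb = mapper(text_emb)

Under this (assumed) view, the caption decoder sees pseudo image embeddings derived from text alone, which is one way a text-only training setting can be made feasible.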