The Unreasonable Effectiveness of CLIP Features for Image Captioning: An Experimental Analysis

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Generating textual descriptions from visual inputs is a fundamental step towards machine intelligence, as it entails modeling the connections between the visual and textual modalities. For years, image captioning models have relied on pre-trained visual encoders and object detectors, trained on relatively small sets of data. Recently, it has been observed that large-scale multi-modal approaches like CLIP (Contrastive Language-Image Pre-training), trained on a massive amount of image-caption pairs, provide a strong zero-shot capability on various vision tasks. In this paper, we study the advantage brought by CLIP in image captioning, employing it as a visual encoder. Through extensive experiments, we show how CLIP can significantly outperform widely-used visual encoders and quantify its role under different architectures, variants, and evaluation protocols, ranging from classical captioning performance to zero-shot transfer.
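The central idea evaluated in the paper is replacing a conventional visual backbone or object detector with CLIP as the visual encoder of a captioning model. The snippet below is a minimal sketch of that setup, not the authors' architecture: it assumes the Hugging Face transformers CLIP implementation, an illustrative checkpoint name ("openai/clip-vit-base-patch32"), and a toy Transformer decoder (CaptionDecoder) that cross-attends to the frozen CLIP patch features.

```python
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# Frozen CLIP visual encoder; the checkpoint name is illustrative, the paper compares several CLIP variants.
clip_name = "openai/clip-vit-base-patch32"
image_processor = CLIPImageProcessor.from_pretrained(clip_name)
visual_encoder = CLIPVisionModel.from_pretrained(clip_name).eval()


class CaptionDecoder(nn.Module):
    """Toy Transformer decoder that cross-attends to CLIP patch features (hypothetical, for illustration)."""

    def __init__(self, vocab_size, d_model=512, clip_dim=768, nhead=8, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.project = nn.Linear(clip_dim, d_model)  # map CLIP feature width to decoder width
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, clip_features):
        memory = self.project(clip_features)          # (B, num_patches + 1, d_model)
        tgt = self.embed(token_ids)                   # (B, T, d_model)
        T = token_ids.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)  # causal mask for autoregressive decoding
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(hidden)                   # (B, T, vocab_size) next-token logits


# Usage: encode an image with CLIP, then predict next-token logits for a partial caption.
image = Image.new("RGB", (224, 224))                  # placeholder image
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    clip_features = visual_encoder(pixel_values).last_hidden_state  # (1, 50, 768) for ViT-B/32

decoder = CaptionDecoder(vocab_size=30522)            # vocab size and token ids below are dummies
logits = decoder(torch.tensor([[101, 2003]]), clip_features)
```

In this sketch the CLIP encoder is kept frozen and only the decoder would be trained on image-caption pairs; the paper's experiments additionally vary the CLIP variant, the captioning architecture, and the evaluation protocol.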
Keywords
unreasonable effectiveness, CLIP features, textual descriptions, visual inputs, machine intelligence, visual modalities, textual modalities, image captioning models, pre-trained visual encoders, object detectors, large-scale multi-modal approaches, Contrastive Language-Image Pre-training, image-caption pairs, zero-shot capability, visual encoder, classical captioning performance