MOFI: Learning Image Representations from Noisy Entity Annotated Images

Wentao Wu, Aleksei Timofeev, Chen Chen, Bowen Zhang, Kun Duan, Shuangning Liu, Yantao Zheng, Jonathon Shlens, Xianzhi Du, Zhe Gan, Yinfei Yang

arXiv (2023)

Abstract
We present MOFI, Manifold OF Images, a new vision foundation model designed to learn image representations from noisy entity-annotated images. MOFI differs from previous work in two key aspects: (i) pre-training data, and (ii) training recipe. Regarding data, we introduce a new approach to automatically assign entity labels to images from noisy image-text pairs. Our approach involves employing a named entity recognition model to extract entities from the alt-text, and then using a CLIP model to select the correct entities as labels of the paired image. It is a simple, cost-effective method that can scale to handle billions of web-mined image-text pairs. Through this method, we have created Image-to-Entities (I2E), a new dataset with 1 billion images and 2 million distinct entities, covering rich visual concepts in the wild. Building upon the I2E dataset, we study different training recipes, including supervised pre-training, contrastive pre-training, and multi-task learning. For contrastive pre-training, we treat entity names as free-form text and further enrich them with entity descriptions. Experiments show that supervised pre-training with large-scale fine-grained entity labels is highly effective for image retrieval tasks, and that multi-task training further improves the performance. The final MOFI model achieves 86.66% mAP on the GPR1200 dataset, surpassing the previous state-of-the-art performance of 72.19% from OpenAI's CLIP model. Further experiments on zero-shot and linear-probe image classification also show that MOFI outperforms a CLIP model trained on the original image-text data, demonstrating the effectiveness of the I2E dataset in learning strong image representations. We release our code and model weights at https://github.com/apple/ml-mofi.
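The abstract's labeling pipeline has two stages: extract candidate entities from the alt-text with a NER model, then use CLIP image-text similarity to keep only the entities that actually match the image. Below is a minimal sketch of the second (selection) stage, assuming CLIP-style embeddings for the image and candidate entities have already been computed; the function name, threshold value, and toy data are illustrative assumptions, not taken from MOFI's released code.

```python
import numpy as np

def select_entities(image_emb, entity_names, entity_embs, threshold=0.2):
    """Return (entity, score) pairs whose cosine similarity to the image
    embedding exceeds `threshold`, best match first.

    NOTE: a hypothetical sketch of the selection step; the real pipeline
    scores NER-extracted candidates with an actual CLIP model.
    """
    img = image_emb / np.linalg.norm(image_emb)
    ents = entity_embs / np.linalg.norm(entity_embs, axis=1, keepdims=True)
    scores = ents @ img  # cosine similarity of each candidate to the image
    order = np.argsort(-scores)  # best match first
    return [(entity_names[i], float(scores[i]))
            for i in order if scores[i] >= threshold]

# Toy example: two entities extracted from an alt-text; the image
# embedding aligns with the first one, so only it survives as a label.
image_emb = np.array([1.0, 0.0])
names = ["Golden Gate Bridge", "golden retriever"]
entity_embs = np.array([[0.9, 0.1],   # close to the image direction
                        [0.0, 1.0]])  # orthogonal -> filtered out
labels = select_entities(image_emb, names, entity_embs)
```

With real embeddings this filtering is what makes the method scale: NER and a single CLIP forward pass per candidate are cheap enough to run over billions of web-mined image-text pairs.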