Multimodal Learned Sparse Retrieval for Image Suggestion
CoRR (2024)
Abstract
Learned Sparse Retrieval (LSR) is a group of neural methods designed to
encode queries and documents into sparse lexical vectors. These vectors can be
efficiently indexed and retrieved using an inverted index. While LSR has shown
promise in text retrieval, its potential in multi-modal retrieval remains
largely unexplored. Motivated by this, in this work, we explore the application
of LSR in the multi-modal domain, i.e., we focus on Multi-Modal Learned Sparse
Retrieval (MLSR). We conduct experiments using several MLSR model
configurations and evaluate the performance on the image suggestion task. We
find that solving the task solely based on the image content is challenging.
Enriching the image content with its caption improves model performance
significantly, underscoring the importance of image captions for providing
fine-grained concepts and contextual information about images. Our approach
presents a practical and effective solution for training LSR retrieval models
in multi-modal settings.
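The abstract notes that LSR encodes queries and documents into sparse lexical vectors that can be served from an inverted index. As a minimal sketch of that retrieval step (not the paper's actual model), the snippet below uses hypothetical term weights and scores documents with a sparse dot product, touching only the posting lists for terms that appear in the query:

```python
from collections import defaultdict

# Hypothetical sparse lexical vectors (term -> weight), as an LSR
# encoder might produce for documents; the values are illustrative.
docs = {
    "d1": {"dog": 1.2, "park": 0.8},
    "d2": {"dog": 0.5, "beach": 1.1},
    "d3": {"cat": 1.4, "park": 0.3},
}

# Build an inverted index: term -> list of (doc_id, weight) postings.
index = defaultdict(list)
for doc_id, vec in docs.items():
    for term, weight in vec.items():
        index[term].append((doc_id, weight))

def retrieve(query_vec, k=2):
    """Rank documents by the sparse dot product between the query
    vector and each document vector, via the inverted index."""
    scores = defaultdict(float)
    for term, q_weight in query_vec.items():
        for doc_id, d_weight in index.get(term, []):
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

# Example query vector, again with illustrative weights.
query = {"dog": 1.0, "park": 0.5}
print(retrieve(query))  # "d1" ranks first: it matches both query terms
```

Because only the query's terms are looked up, scoring cost scales with the length of the matching posting lists rather than with the collection size, which is what makes sparse representations efficient to serve.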