Cross-Modal Attention Preservation with Self-Contrastive Learning for Composed Query-Based Image Retrieval (Just Accepted)

ACM Transactions on Multimedia Computing, Communications, and Applications (2023)

Abstract
In this paper, we study a challenging cross-modal image retrieval task, Composed Query-Based Image Retrieval (CQBIR), in which the query is not a single text query but a composed query, i.e., a reference image together with a modification text. Compared with the conventional cross-modal image-text retrieval task, CQBIR is more challenging, as it requires properly preserving and modifying specific image regions according to the multi-level semantic information learned from the multi-modal query. Most recent works focus on extracting preserved and modified information and compositing them into a unified representation. However, we observe that the preserved regions learned by existing methods contain redundant modified information, inevitably degrading overall retrieval performance. To this end, we propose a novel method termed Cross-Modal Attention Preservation (CMAP). Specifically, we first leverage cross-level interaction to fully account for multi-granular semantic information, which aims to supplement high-level semantics for effective image retrieval. Furthermore, unlike conventional contrastive learning, our method introduces self-contrastive learning into learning the preserved information, to prevent the model from confusing the attention for the preserved part with that for the modified part. Extensive experiments on three widely used CQBIR datasets, i.e., FashionIQ, Shoes, and Fashion200k, demonstrate that our proposed CMAP method significantly outperforms the current state-of-the-art methods on all datasets. The anonymous implementation code of our CMAP method is available at https://github.com/CFM-MSG/Code_CMAP.
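To make the self-contrastive idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation; see their repository for the actual method). It assumes per-sample feature vectors `preserved` and `modified` produced by the two attention branches, and a `positive` feature the preserved branch should match (e.g. a target-image feature); the loss is an InfoNCE-style objective that pulls the preserved feature toward its positive while pushing it away from the modified feature of the same sample:

```python
import numpy as np

def l2norm(x, eps=1e-8):
    # Normalize feature rows to unit length so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def self_contrastive_loss(preserved, positive, modified, tau=0.1):
    """Hypothetical sketch of a self-contrastive objective.

    preserved: (B, D) anchor features from the preserved-attention branch
    positive:  (B, D) features the preserved branch should match (assumption)
    modified:  (B, D) same-sample features from the modified-attention branch,
               used as negatives so the two attentions do not collapse together
    """
    p, pos, neg = l2norm(preserved), l2norm(positive), l2norm(modified)
    sim_pos = np.sum(p * pos, axis=1) / tau          # (B,) anchor-positive similarity
    sim_neg = np.sum(p * neg, axis=1) / tau          # (B,) anchor-negative similarity
    logits = np.stack([sim_pos, sim_neg], axis=1)    # (B, 2)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the positive in slot 0: low when preserved features
    # resemble the positive and differ from the modified features.
    return float(-log_softmax[:, 0].mean())
```

The loss is small when the preserved-branch feature aligns with its positive and is dissimilar from the same sample's modified-branch feature, which is one plausible way to discourage the preserved attention from absorbing modified information.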
Keywords
Composed Query-based Image Retrieval, Cross-Modal Retrieval, Cross-Level Interaction, Preserved and Modified Attentions