Deep Adversarial Discrete Hashing for Cross-Modal Retrieval

ICMR '20: International Conference on Multimedia Retrieval, Dublin, Ireland, June 2020

Abstract
Cross-modal hashing has received widespread attention in cross-modal retrieval due to its superior retrieval efficiency and low storage cost. However, most existing cross-modal hashing methods learn binary codes directly from multimedia data and therefore cannot fully exploit the semantic knowledge in the data. Furthermore, they cannot learn the ranking-based similarity relevance of multi-label data points, and they usually impose a relaxed constraint on the hash codes, which introduces non-negligible quantization loss during optimization. In this paper, a hashing method called Deep Adversarial Discrete Hashing (DADH) is proposed to address these issues for cross-modal retrieval. The proposed method uses adversarial training to learn features across modalities and to ensure that the feature representations of the different modalities follow consistent distributions. We also introduce a weighted cosine triplet constraint that makes full use of the semantic knowledge in the multi-label annotations to enforce precise ranking relevance between item pairs. In addition, we use a discrete hashing strategy to learn binary codes without relaxation, so that the semantic knowledge from the labels is preserved in the hash codes while the quantization loss is minimized. Ablation and comparison experiments on two cross-modal databases show that DADH improves performance and outperforms several state-of-the-art hashing methods for cross-modal retrieval.
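
To make the ranking objective more concrete, below is a minimal PyTorch sketch of what a weighted cosine triplet loss could look like, where the triplet weight is derived from multi-label overlap. The function name, the Jaccard-style weighting, and the margin value are illustrative assumptions; the abstract does not give the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_cosine_triplet_loss(anchor, positive, negative,
                                 anchor_labels, pos_labels, neg_labels,
                                 margin=0.5):
    """Sketch of a weighted cosine triplet loss for multi-label data.

    The triplet hinge is scaled by how much more the positive's label
    set overlaps the anchor's than the negative's does, so items that
    share more labels are pulled closer. This weighting scheme is an
    assumption, not the paper's definition.
    """
    # Cosine similarities between anchor-positive and anchor-negative pairs.
    sim_pos = F.cosine_similarity(anchor, positive, dim=1)
    sim_neg = F.cosine_similarity(anchor, negative, dim=1)

    # Ranking weight from multi-label overlap (assumed Jaccard-style),
    # normalized by the anchor's label count.
    inter_pos = (anchor_labels * pos_labels).sum(dim=1).float()
    inter_neg = (anchor_labels * neg_labels).sum(dim=1).float()
    weight = (inter_pos - inter_neg).clamp(min=0)
    weight = weight / anchor_labels.sum(dim=1).clamp(min=1).float()

    # Hinge on the cosine gap, scaled by the label-derived weight.
    return (weight * F.relu(margin + sim_neg - sim_pos)).mean()
```

In a cross-modal setting such as DADH's, a loss of this form would presumably be applied symmetrically to image-to-text and text-to-image triplets, alongside the adversarial alignment and discrete-coding terms described in the abstract.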