Enhanced Discrete Multi-Modal Hashing: More Constraints Yet Less Time to Learn

IEEE Transactions on Knowledge and Data Engineering (2022)

Abstract
Due to the exponential growth of multimedia data, multi-modal hashing, a promising technique for making cross-view retrieval scalable, is attracting more and more attention. However, most existing multi-modal hashing methods either divide the learning process unnaturally into two separate stages or treat the discrete optimization problem simplistically as a continuous one, which leads to suboptimal results. Recently, a few discrete multi-modal hashing methods have emerged to address these issues, but they still ignore several important discrete constraints (such as the balance and decorrelation of hash bits). In this paper, we overcome those limitations by proposing a novel method named "Enhanced Discrete Multi-modal Hashing (EDMH)", which learns binary codes and hashing functions simultaneously from the pairwise similarity matrix of the data, under the aforementioned discrete constraints. Although the EDMH model looks considerably more complex than other multi-modal hashing models, we are able to develop a fast iterative learning algorithm for it, since each subproblem of its optimization has a closed-form solution after a couple of auxiliary variables are introduced. Our experimental results on three real-world datasets reveal the usefulness of those previously ignored discrete constraints and demonstrate that EDMH not only performs much better than state-of-the-art competitors on several retrieval metrics but also runs much faster than most of them.
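For concreteness, the balance and decorrelation constraints mentioned in the abstract are usually formulated in the hashing literature as B⊤1 = 0 (each bit is +1 and -1 on roughly half the samples) and B⊤B = nI (different bits are uncorrelated), for a code matrix B in {-1, +1}^(n x r). The sketch below is illustrative only, based on that standard formulation; the variable and function names are not from the paper, and it does not reproduce the authors' optimization algorithm.

```python
import numpy as np

def constraint_violations(B):
    """Measure how far a candidate binary code matrix B (n x r, entries in {-1, +1})
    is from satisfying the bit-balance and bit-decorrelation constraints."""
    n, r = B.shape
    # Balance: B^T 1 = 0  ->  column sums should be (near) zero.
    balance_gap = np.abs(B.sum(axis=0)).max() / n
    # Decorrelation: B^T B = n I  ->  normalized Gram matrix should be (near) identity.
    gram = (B.T @ B) / n
    decorrelation_gap = np.abs(gram - np.eye(r)).max()
    return balance_gap, decorrelation_gap

# Example: random +/-1 codes satisfy both constraints only approximately (in expectation),
# which is why enforcing them explicitly during learning can matter.
rng = np.random.default_rng(0)
B = rng.choice([-1.0, 1.0], size=(1000, 16))
print(constraint_violations(B))
```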
Keywords
Learning to hash, discrete optimization, semantics alignment, cross-view retrieval