Understanding and Improving Neural Ranking Models from a Term Dependence View.

AIRS (2019)

Abstract
Recently, neural information retrieval (NeuIR) has attracted considerable interest, and a variety of neural models have been proposed for the core ranking problem. Beyond the continual refresh of state-of-the-art neural ranking performance, the community calls for more analysis and understanding of these emerging neural ranking models. In this paper, we analyze the new models from a traditional perspective, namely term dependence. Without loss of generality, most existing neural ranking models fall into three categories with respect to their underlying assumption on query term dependence: independent models, dependent models, and hybrid models. We conduct rigorous empirical experiments over several representative models from these three categories on a benchmark dataset and a large click-through dataset. Interestingly, we find that no single type of model achieves a consistent win over the others across different search queries, whereas an oracle that selects the right model for each query obtains significant performance improvement. Based on this analysis, we introduce an adaptive strategy for neural ranking models. We hypothesize that the term dependence in a query can be measured through the divergence between its independent and dependent representations, and we therefore propose a dependence gate based on this divergence to softly select neural ranking models for each query. Experimental results verify the effectiveness of the adaptive strategy.
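As an illustration of how such a dependence gate could work, below is a minimal PyTorch sketch. The DependenceGate module, its self-attention encoder, the cosine/L2 divergence features, and all dimensions are illustrative assumptions rather than the paper's exact architecture; only the overall idea, measuring the divergence between an independent and a dependent query representation and using it to softly mix the two models' scores, comes from the abstract.

```python
# Illustrative sketch of a query-level "dependence gate" (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DependenceGate(nn.Module):
    """Softly mixes an independent and a dependent ranking score per query,
    driven by the divergence between two query representations."""

    def __init__(self, emb_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Dependent query encoder: a single self-attention layer stands in for
        # whatever context-aware encoder the dependent model actually uses.
        self.context_encoder = nn.MultiheadAttention(
            emb_dim, num_heads=4, batch_first=True
        )
        # Small MLP mapping divergence features to a mixing weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, query_emb, score_indep, score_dep):
        # query_emb: (batch, query_len, emb_dim) term embeddings
        # score_indep / score_dep: (batch,) scores from the two ranking models
        indep_repr = query_emb.mean(dim=1)            # context-free query vector
        ctx, _ = self.context_encoder(query_emb, query_emb, query_emb)
        dep_repr = ctx.mean(dim=1)                    # context-aware query vector

        # Divergence features between the two representations (assumed choice).
        cos_div = 1.0 - F.cosine_similarity(indep_repr, dep_repr, dim=-1)
        l2_div = (indep_repr - dep_repr).norm(dim=-1)
        g = self.gate(torch.stack([cos_div, l2_div], dim=-1)).squeeze(-1)

        # Soft selection: g weights the dependent model, (1 - g) the independent one.
        return g * score_dep + (1.0 - g) * score_indep
```

In this sketch the gate sees only two hand-crafted divergence features; a divergence representation could just as well be the element-wise difference of the two query vectors fed directly into the gating MLP.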
Keywords
Understanding, Term dependence, Query adaptation