A Deep Learning Approach for Selective Relevance Feedback
CoRR (2024)
Abstract
Pseudo-relevance feedback (PRF) can enhance average retrieval effectiveness
over a sufficiently large number of queries. However, PRF often introduces
drift away from the original information need, hurting the retrieval
effectiveness of some queries. While a selective application of PRF can
potentially alleviate this issue, previous approaches have largely relied on
unsupervised or feature-based learning to determine whether a query should be
expanded. In contrast, we revisit the problem of selective PRF from a deep
learning perspective, presenting a model that is entirely data-driven and
trained in an end-to-end manner. The proposed model leverages a
transformer-based bi-encoder architecture. Additionally, to further improve
retrieval effectiveness with this selective PRF approach, we make use of the
model's confidence estimates to combine the information from the original and
expanded queries. In our experiments, we apply this selective feedback on a
number of different combinations of ranking and feedback models, and show that
our proposed approach consistently improves retrieval effectiveness for both
sparse and dense ranking models, with the feedback models being sparse,
dense, or generative.
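The confidence-weighted combination described above can be illustrated with a small sketch. The paper does not give its exact fusion formula here, so the linear interpolation, the function `fuse_rankings`, and the example scores below are all hypothetical: the selector's confidence that expansion helps is used as the interpolation weight between the original-query and expanded-query document scores.

```python
# Hypothetical sketch of confidence-weighted score fusion; not the
# authors' exact formulation.

def fuse_rankings(original_scores, expanded_scores, confidence):
    """Interpolate per-document scores from the original and the
    PRF-expanded query, weighted by the selector's confidence
    (a float in [0, 1]) that expansion helps this query.

    original_scores / expanded_scores: dict mapping doc_id -> score.
    Returns doc_ids ranked by fused score, best first.
    """
    docs = set(original_scores) | set(expanded_scores)
    fused = {}
    for doc in docs:
        s_orig = original_scores.get(doc, 0.0)
        s_exp = expanded_scores.get(doc, 0.0)
        fused[doc] = (1.0 - confidence) * s_orig + confidence * s_exp
    return sorted(fused, key=fused.get, reverse=True)

# Example: the selector is 80% confident that expansion helps,
# so the expanded-query scores dominate the fused ranking.
ranking = fuse_rankings({"d1": 1.0, "d2": 0.5},
                        {"d2": 0.9, "d3": 0.4},
                        confidence=0.8)
# ranking == ["d2", "d3", "d1"]
```

A confidence near 0 leaves the original ranking essentially untouched, which is the desired behaviour when the selector predicts that expansion would cause query drift.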