Simulating Simple And Fallible Relevance Feedback

ECIR 2011: Proceedings of the 33rd European Conference on Advances in Information Retrieval - Volume 6611 (2011)

Abstract
Much of the research in relevance feedback (RF) has been performed under laboratory conditions using test collections and either test persons or simple simulation. These studies have given mixed results. The design of the present study is unique. First, the initial queries are realistically short queries generated by real end-users. Second, we perform a user simulation with several RF scenarios. Third, we simulate human fallibility in providing RF, i.e., incorrectness in feedback. Fourth, we employ graded relevance assessments in the evaluation of the retrieval results. The research question is: how does RF affect IR performance when initial queries are short and feedback is fallible? Our findings indicate that very fallible feedback is no different from pseudo-relevance feedback (PRF) and not effective on short initial queries. However, RF with empirically observed fallibility is as effective as correct RF and able to improve the performance of short initial queries.
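The core simulation idea in the abstract, fallible relevance feedback, can be sketched as flipping a user's relevance judgments with some error probability. The function name, the binary-relevance representation, and the error model below are illustrative assumptions, not the paper's actual protocol (the paper uses graded relevance and empirically observed fallibility rates):

```python
import random

def simulate_fallible_feedback(relevance, error_rate, rng=None):
    """Flip each binary relevance judgment (1 = relevant, 0 = not)
    with probability `error_rate`, modelling a fallible user.
    This is a simplified sketch of the paper's idea, not its exact model."""
    rng = rng or random.Random()
    return [(1 - r) if rng.random() < error_rate else r for r in relevance]

# error_rate=0.0 reproduces correct feedback; error_rate=1.0 inverts
# every judgment, an extreme of "very fallible" feedback.
judgments = [1, 0, 1, 1, 0]
print(simulate_fallible_feedback(judgments, 0.0))  # [1, 0, 1, 1, 0]
print(simulate_fallible_feedback(judgments, 1.0))  # [0, 1, 0, 0, 1]
```

Intermediate error rates would then model the empirically observed fallibility the study compares against correct RF and PRF.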
Keywords
Relevance feedback, fallibility, simulation