Evaluating Retrieval over Sessions: The TREC Session Track 2011-2014

SIGIR '16: The 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, Pisa, Italy, July 2016

Abstract
Information Retrieval (IR) research has traditionally focused on serving the best results for a single query, so-called ad hoc retrieval. However, users typically search iteratively, refining and reformulating their queries during a session. A key challenge in studying this interaction is the creation of suitable evaluation resources to assess the effectiveness of IR systems over sessions. This paper describes the TREC Session Track, which ran from 2010 through 2014 and focused on forming test collections that included various forms of implicit feedback. We describe the test collections, provide a brief analysis of the differences between the datasets over the years, and present evaluation results demonstrating that the use of user session data significantly improved effectiveness.