Explicit In Situ User Feedback For Web Search Results

SIGIR '16: The 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, Pisa, Italy, July 2016

Abstract
Gathering evidence about whether a search result is relevant is a core concern in the evaluation and improvement of information retrieval systems. Two common sources of evidence for establishing relevance are judgments from trained assessors and logs of online user behavior. However, both are limited; it is hard for a trained assessor to know exactly what users want to find, and user behavior only provides an implicit and ambiguous signal. In this paper, we aim to address these limitations by collecting explicit feedback on web search results from users in situ as they search. When users return to the search result page via the browser back button after having clicked on a result, we ask them to provide a binary thumbs up or thumbs down judgment and text feedback. We collect in situ feedback from a large commercial search engine, and compare this feedback with the judgments provided by trained assessors. We find that in situ feedback differs significantly from traditional relevance judgments, and that it suggests a different interpretation of behavior signals, with the dwell time threshold between negative and positive in situ feedback being 87 seconds, longer than the more common heuristic of 30 seconds. Using text feedback from users, we discuss why user feedback may differ from editorial judgments.
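To make the dwell-time finding concrete, the sketch below (hypothetical code, not from the paper; the function and constant names are assumptions) shows how the choice of threshold changes the label assigned to the same click.

```python
# A minimal sketch (not the paper's code): labeling a result click as positive
# or negative from dwell time alone, to contrast the common 30-second heuristic
# with the 87-second threshold the paper reports for in situ feedback.

COMMON_HEURISTIC_SECONDS = 30   # widely used dwell-time satisfaction heuristic
IN_SITU_THRESHOLD_SECONDS = 87  # threshold the paper reports between thumbs-down and thumbs-up

def label_click(dwell_seconds: float, threshold: float = IN_SITU_THRESHOLD_SECONDS) -> str:
    """Return 'positive' if the user dwelled on the clicked result for at
    least `threshold` seconds before returning via the back button."""
    return "positive" if dwell_seconds >= threshold else "negative"

if __name__ == "__main__":
    # A 45-second dwell looks satisfied under the 30-second heuristic but
    # dissatisfied under the longer in situ threshold.
    print(label_click(45, COMMON_HEURISTIC_SECONDS))   # positive
    print(label_click(45, IN_SITU_THRESHOLD_SECONDS))  # negative
```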
Keywords
IR evaluation,explicit user feedback,web search