The Impact of Summaries: What Makes a User Click?

msra(2010)

Abstract
Modern retrieval systems are in fact two-tier systems in which a user first views summaries of the results in a hit-list, and only when she decides to "click" is the full result document consulted. Standard information retrieval evaluation ignores the crucial summary step and directly evaluates in terms of the relevance of the resulting document. In this paper, we investigate the impact of the result summaries on the user's decision to click or not to click. Specifically, we want to find out both what information in the summary triggers a positive selection decision to view a result, and what information triggers a negative selection decision. We use a special document genre, archival finding aids, where results have a complex document structure and currently available systems experiment with structured summaries having both static elements (like the title and an abstract manually compiled by an archivist) and query-biased snippets (showing the matching keywords in context). We conducted an experiment in which we asked test persons to explicitly mark the parts of summaries that trigger a selection decision, and asked them to explain further (i.e., why and how). The results from this user study indicate the importance of sufficient context in the summary. Selection decisions were primarily based on the static elements: the title and abstract of the document. This may be a result of the completeness and coherence of the information in these elements, although length also played a clear role. A whole paragraph (as in the abstract) triggered a decision more frequently than a short sentence (as in the title) or an incomplete sentence (as in the query-biased snippets).

Turpin et al. (20), in their study of including summaries in system evaluation, revealed that summaries need to be evaluated in addition to the documents when constructing a test collection. In their experiment, in which users were asked to provide relevance assessments of both summaries and documents, 14% of the highly relevant and 31% of the relevant documents were never examined by the users because the summary was judged irrelevant. This shows that the document summary presented by a retrieval system does not always accurately reflect the document content. Since the summary is the user's first selection moment, this can result in users missing relevant documents. In this paper, our main aim is to investigate the impact of document summaries on a user's decision to click or not. Specifically, we investigate two research questions: what information in the summary triggers a positive selection decision, and what information triggers a negative selection decision.
Keywords
negative selection, positive selection, document structure