Crowdsourcing Similarity Judgments for Agreement Analysis in End-User Elicitation Studies

UIST '18: The 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, October 2018

Abstract
End-user elicitation studies are a popular design method, but their data require substantial time and effort to analyze. In this paper, we present Crowdsensus, a crowd-powered tool that enables researchers to efficiently analyze the results of elicitation studies using subjective human judgment and automatic clustering algorithms. In addition to our own analysis, we asked six expert researchers with experience running and analyzing elicitation studies to analyze an end-user elicitation dataset of 10 functions for operating a web browser, each with 43 voice commands elicited from end-users, for a total of 430 voice commands. We used Crowdsensus to gather similarity judgments of these same 430 commands from 410 online crowd workers. The crowd outperformed the experts, arriving at the same results for seven of eight functions and resolving a function on which the experts failed to agree. In addition, using Crowdsensus was about four times faster than using the experts.
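The abstract describes pairing crowd similarity judgments with automatic clustering, and the keywords mention "agreement rate," which in elicitation studies is conventionally computed from groups of equivalent proposals. As a minimal illustration only (not the paper's exact pipeline), the sketch below applies the standard Vatavu and Wobbrock (CHI 2015) agreement-rate formula to cluster labels for one function; the function name and the example data are assumptions.

from collections import Counter

def agreement_rate(cluster_labels):
    # Agreement rate AR for one referent (function), per Vatavu & Wobbrock (2015):
    # AR = sum over groups of |P_i|(|P_i|-1) / (|P|(|P|-1)),
    # where proposals judged equivalent (e.g., via crowd similarity judgments
    # plus clustering) share the same cluster label.
    n = len(cluster_labels)
    if n < 2:
        return 1.0
    sizes = Counter(cluster_labels).values()
    return sum(s * (s - 1) for s in sizes) / (n * (n - 1))

# Hypothetical example: six voice commands for a "go back" function,
# clustered into one group of four and one group of two.
print(agreement_rate(["a", "a", "a", "a", "b", "b"]))  # ~0.467

With such a per-function agreement rate, results from different analysts (experts or crowd-derived clusterings) can be compared function by function, which is the kind of comparison the study reports.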
Keywords
End-user elicitation study, agreement rate, online crowds, crowdsourcing, human computation, Mechanical Turk