A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks.

HCOMP (Works in Progress / Demos), 2013

Cited 29 | Views 10
Abstract
This paper describes an approach to improving the reliability of a crowdsourced labeling task for which there is no objective right answer. Our approach focuses on three contingent elements of the labeling task: data quality, worker reliability, and task design. We describe how we developed and applied this framework to the task of labeling tweets according to their interestingness. We use in-task CAPTCHAs to identify unreliable workers, and measure inter-rater agreement to decide whether subtasks have objective or merely subjective answers.
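The abstract's agreement-based split between objective and merely subjective subtasks needs a concrete statistic, and the excerpt does not say which one the authors use. As one plausible instantiation, the sketch below computes Fleiss' kappa over a per-item label-count matrix; the function name, the example tweet data, and the binary interesting/not-interesting scheme are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of label counts.

    counts[i, j] = number of workers who put item i in category j.
    Assumes every item received the same number of labels.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()  # labels per item (constant by assumption)

    # Observed agreement: mean fraction of agreeing rater pairs per item.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 4 tweets, 5 workers each,
# categories = (interesting, not interesting).
ratings = [[4, 1],
           [5, 0],
           [2, 3],
           [3, 2]]
print(f"kappa = {fleiss_kappa(ratings):.3f}")  # near 0 => largely subjective
```

Under this reading, a kappa near 1 would mark a subtask as effectively objective, while a value near 0 suggests workers' answers are merely subjective, matching the decision the abstract describes.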
Keywords
crowdsourcing, experimental design, CAPTCHA