Complementing text entry evaluations with a composition task

ACM Trans. Comput.-Hum. Interact. (2014)

Citations: 64 | Views: 19
Abstract
A common methodology for evaluating text entry methods is to ask participants to transcribe a predefined set of memorable sentences or phrases. In this article, we explore whether we can complement the conventional transcription task with a more externally valid composition task. In a series of large-scale crowdsourced experiments, we found that participants could consistently and rapidly invent high-quality, creative compositions with only modest reductions in entry rates. Based on our series of experiments, we provide a best-practice procedure for using composition tasks in text entry evaluations. This includes a judging protocol that can be performed either by the experimenters or by crowdsourced workers on a microtask market. We evaluated our composition task procedure using a text entry method unfamiliar to participants. Our empirical results show that the composition task can serve as a valid complementary text entry evaluation method.
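The abstract compares entry rates between transcription and composition tasks. For readers unfamiliar with how entry rate is conventionally reported in text entry studies, below is a minimal sketch of the standard words-per-minute (WPM) calculation, where one "word" is defined as five characters including spaces. It is illustrative only and not taken from the paper; the function and variable names are assumptions.

```python
# Minimal sketch (not from the paper): the conventional WPM entry rate metric
# used in text entry evaluations. One "word" = five characters, and timing is
# usually taken to start at the first keystroke, hence len(text) - 1.

def entry_rate_wpm(entered_text: str, seconds_elapsed: float) -> float:
    """Return entry rate in words per minute for a single trial."""
    if seconds_elapsed <= 0:
        raise ValueError("elapsed time must be positive")
    chars = max(len(entered_text) - 1, 0)
    return (chars / seconds_elapsed) * 60.0 / 5.0


if __name__ == "__main__":
    # Example: 42 characters entered in 20 seconds -> 24.6 WPM.
    print(round(entry_rate_wpm("the quick brown fox jumps over a lazy dog.", 20.0), 1))
```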
Keywords
complementing text entry evaluation, text entry method, composition task, conventional transcription task, creative composition, text entry evaluation, entry rate, externally valid composition task, composition task procedure, best-practice procedure, valid complementary text entry, transcription, composition, crowdsourcing