Performance comparisons of phrase sets and presentation styles for text entry evaluations.

IUI '12: 17th International Conference on Intelligent User Interfaces, Lisbon, Portugal, February 2012.

Abstract
We empirically compare five different publicly-available phrase sets in two large-scale (N = 225 and N = 150) crowdsourced text entry experiments. We also investigate the impact of asking participants to memorize phrases before writing them versus allowing participants to see the phrase during text entry. We find that asking participants to memorize phrases increases entry rates at the cost of slightly increased error rates. This holds for both a familiar and for an unfamiliar text entry method. We find statistically significant differences between some of the phrase sets in terms of both entry and error rates. Based on our data, we arrive at a set of recommendations for choosing suitable phrase sets for text entry evaluations.
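The abstract reports comparisons in terms of entry rates and error rates. As context, the sketch below shows the metrics conventionally used in text entry evaluations: words per minute (with the usual 5-characters-per-word convention) and an uncorrected character error rate based on minimum string distance. These formulas are standard in the field but are assumptions here; the paper's exact definitions are not given in this abstract.

```python
def entry_rate_wpm(transcribed: str, seconds: float) -> float:
    """Entry rate in words per minute, using the conventional 5-chars-per-word factor."""
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    """Minimum string (Levenshtein) distance between presented and transcribed text."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def error_rate(presented: str, transcribed: str) -> float:
    """Uncorrected character error rate relative to the longer of the two strings."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed), 1)

if __name__ == "__main__":
    # Hypothetical trial: participant transcribes a presented phrase in 9 seconds.
    presented = "the quick brown fox"
    transcribed = "the quick brwn fox"
    print(f"entry rate: {entry_rate_wpm(transcribed, 9.0):.1f} wpm")
    print(f"error rate: {error_rate(presented, transcribed):.2%}")
```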