Not-quite-naïve listeners: Students as an audience for gamified crowdsourcing

Journal of the Acoustical Society of America (2017)

Abstract
Collecting independent listeners’ judgments of speech accuracy/intelligibility is an essential component of research on speech disorders. Raters can be trained clinicians, students in speech-language pathology, or naïve listeners (now commonly recruited online via crowdsourcing platforms such as Amazon Mechanical Turk/AMT). However, limited comparison data exist to guide researchers in determining which rater population to use. We describe a study (Hitchcock et al., in prep) in which 2,256 tokens of English /r/ at the word level, produced by five children receiving intervention for /r/ misarticulation, were rated in a binary fashion using the online platform Experigen. Raters were certified clinicians (n = 3), students in speech-language pathology (n = 9 unique listeners per token), or naïve listeners recruited on AMT (n = 9 unique listeners per token). Interrater reliability was higher when comparing modal ratings between clinicians and students (Cohen’s kappa = .73, CI = .70–.77) than between clinicians and ...
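The interrater reliability statistic reported above, Cohen's kappa, measures agreement between two raters on categorical (here, binary correct/incorrect) judgments, corrected for the agreement expected by chance. A minimal sketch of the computation follows; the rating sequences are made-up illustrative data, not the study's ratings.

```python
# Cohen's kappa for two raters' binary judgments: a hedged sketch.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is chance agreement given each rater's marginal label rates.

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    labels = sorted(set(a) | set(b))
    # Observed proportion of tokens on which the raters agree.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each rater labeled independently at
    # their own marginal rates.
    p_e = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary accuracy ratings (1 = correct /r/, 0 = misarticulated)
# for ten tokens from two raters.
rater1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.58
```

In the study itself, each token's "modal rating" within a listener group (e.g. the majority vote of the nine students) would be computed first, and kappa then compared across groups.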