Multi-Scale Context Adaptation For Improving Child Automatic Speech Recognition In Child-Adult Spoken Interactions

18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017), Vols 1-6: Situated Interaction (2017)

Abstract
The mutual influence of participant behavior in a dyadic interaction has been studied across different modalities and quantified by computational models. In this paper, we consider the task of automatic speech recognition for children's speech, in the context of child-adult spoken interactions during interviews of children suspected to have been maltreated. Our long-term goal is to provide insights within this immensely important, sensitive domain through large-scale lexical and paralinguistic analysis. We demonstrate improvement in child speech recognition accuracy by conditioning on both the domain and the interlocutor's (adult) speech. Specifically, we use information from the automatic speech recognizer outputs of the adult's speech, for which we have more reliable estimates, to modify the recognition system for the child's speech in an unsupervised manner. By adapting first at the session level, and then at the utterance level, we demonstrate an absolute improvement of up to 28% in WER and 55% in perplexity over the baseline results. We also report results of a parallel human speech recognition (HSR) experiment in which annotators are asked to transcribe the child's speech under two conditions: with and without contextual speech information. The demonstrated ASR improvements and the HSR experiment illustrate the importance of context in aiding child speech recognition, whether by humans or computers.
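The abstract does not detail how the adult's ASR output modifies the child's recognizer. As a rough, hypothetical illustration of one standard adaptation technique (not necessarily the authors' method), the sketch below linearly interpolates a background unigram language model with a context model estimated from the adult's ASR hypotheses within a session; all data and the interpolation weight are invented for the example.

```python
import math
from collections import Counter

def unigram_lm(tokens, vocab, alpha=1.0):
    """Additive-smoothed unigram probabilities over a fixed vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def interpolate(p_bg, p_ctx, lam):
    """Linear interpolation: (1 - lam) * background + lam * context."""
    return {w: (1 - lam) * p_bg[w] + lam * p_ctx[w] for w in p_bg}

def perplexity(lm, tokens):
    """Per-token perplexity of a token sequence under a unigram LM."""
    logp = sum(math.log(lm[w]) for w in tokens)
    return math.exp(-logp / len(tokens))

# Toy session: hypothetical adult ASR output and a child utterance.
background = "the cat sat on the mat the dog ran".split()
adult_hyp = "did the dog chase the cat".split()
child_utt = "the dog chase the cat".split()

vocab = set(background) | set(adult_hyp) | set(child_utt)
p_bg = unigram_lm(background, vocab)
p_ctx = unigram_lm(adult_hyp, vocab)
p_mix = interpolate(p_bg, p_ctx, lam=0.5)

# Conditioning on the adult's speech lowers perplexity on this utterance.
print(f"background PPL: {perplexity(p_bg, child_utt):.2f}")
print(f"adapted PPL:    {perplexity(p_mix, child_utt):.2f}")
```

In this toy setting the adapted model assigns more mass to words the adult just used ("chase"), so the child utterance's perplexity drops, mirroring the kind of gain the abstract reports at session and utterance level.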
Keywords
dyadic interaction, children's speech