Joint training of interpolated exponential n-gram models.

ASRU (2013)

Abstract
For many speech recognition tasks, the best language model performance is achieved by collecting text from multiple sources or domains, and interpolating language models built separately on each individual corpus. When multiple corpora are available, it has also been shown that when using a domain adaptation technique such as feature augmentation [1], the performance on each individual domain can be improved by training a joint model across all of the corpora. In this paper, we explore whether improving each domain model via joint training also improves performance when interpolating the models together. We show that the diversity of the individual models is an important consideration, and propose a method for adjusting diversity to optimize overall performance. We present results using word n-gram models and Model M, a class-based n-gram model, and demonstrate improvements in both perplexity and word-error rate relative to state-of-the-art results on a Broadcast News transcription task.
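The sketch below (not from the paper) illustrates the interpolation step the abstract refers to: per-domain language models are combined as a weighted mixture, with mixture weights estimated by EM on held-out text. For simplicity it uses maximum-likelihood unigram models in place of the exponential n-gram / Model M components; the corpus names and function names are hypothetical.

```python
from collections import defaultdict

# Sketch only: unigram stand-ins for the paper's exponential n-gram models.

def train_unigram(corpus):
    """Maximum-likelihood unigram model from a list of tokens."""
    counts = defaultdict(int)
    for w in corpus:
        counts[w] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interp_prob(word, models, weights, floor=1e-9):
    """P(word) under the weighted mixture of domain models."""
    return sum(lam * m.get(word, floor) for lam, m in zip(weights, models))

def em_weights(models, heldout, iters=20, floor=1e-9):
    """Estimate mixture weights that maximize held-out likelihood (EM)."""
    k = len(models)
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: accumulate each component's posterior responsibility per token
        resp = [0.0] * k
        for w in heldout:
            probs = [lam * m.get(w, floor) for lam, m in zip(weights, models)]
            z = sum(probs)
            for i in range(k):
                resp[i] += probs[i] / z
        # M-step: renormalize responsibilities into updated weights
        total = sum(resp)
        weights = [r / total for r in resp]
    return weights

# Toy corpora standing in for, e.g., Broadcast News vs. web text (hypothetical)
bn_corpus = "the president said the senate will vote".split()
web_corpus = "click the link to read the full story".split()
heldout = "the senate will read the story".split()

models = [train_unigram(bn_corpus), train_unigram(web_corpus)]
weights = em_weights(models, heldout)
print("interpolation weights:", weights)
print("P('senate') =", interp_prob("senate", models, weights))
```

In the paper's setup, the component models would instead be word n-gram or Model M models trained jointly across corpora (e.g. via feature augmentation), and the question studied is how that joint training, and the resulting diversity among the components, affects the quality of the interpolated mixture; the code above only illustrates the mixing and weight-estimation step.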
Keywords
interpolation, speech recognition, text analysis, Broadcast News transcription task, best language model performance, class-based n-gram model, domain adaptation technique, feature augmentation, individual corpus, interpolated exponential n-gram models, joint training, Model M, multiple corpora, speech recognition tasks, word n-gram models, word-error rate