Discriminative Training of Decoding Graphs for Large Vocabulary Continuous Speech Recognition

ICASSP (4) (2007)

Cited by 34
Abstract
Finite-state decoding graphs integrate the decision trees, pronunciation model and language model for speech recognition into a unified representation of the search space. We explore discriminative training of the transition weights in the decoding graph in the context of large vocabulary speech recognition. In preliminary experiments on the RT-03 English Broadcast News evaluation set, the word error rate was reduced by about 5.7% relative, from 23.0% to 21.7%. We discuss how this method is particularly applicable to low-latency and low-resource applications such as real-time closed captioning of broadcast news and interactive speech-to-speech translation.
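The abstract does not spell out how the transition weights are updated, but the general idea of discriminatively adjusting per-arc weights in a compiled decoding graph can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical example, not the paper's implementation: it assumes a toy graph representation keyed by arc identifiers and uses a simple perceptron-style update that rewards arcs on the reference path and penalizes arcs on the best competing hypothesis.

```python
# Illustrative sketch (not the paper's method): perceptron-style
# discriminative update of transition weights in a decoding graph.
from dataclasses import dataclass, field
from collections import Counter


@dataclass
class DecodingGraph:
    # Hypothetical representation: one weight per arc id, where arc ids are
    # assumed to identify transitions in the compiled decision-tree /
    # pronunciation / language-model graph.
    arc_weights: dict = field(default_factory=dict)

    def update(self, reference_arcs, hypothesis_arcs, eta=0.1):
        """Move weights toward the reference path and away from the
        competing (errorful) hypothesis path."""
        ref_counts = Counter(reference_arcs)
        hyp_counts = Counter(hypothesis_arcs)
        for arc_id in set(ref_counts) | set(hyp_counts):
            grad = ref_counts[arc_id] - hyp_counts[arc_id]
            self.arc_weights[arc_id] = (
                self.arc_weights.get(arc_id, 0.0) + eta * grad
            )


# Usage: after decoding an utterance, align the reference transcript and the
# top hypothesis to the graph to obtain their arc sequences, then update.
graph = DecodingGraph()
graph.update(reference_arcs=["a1", "a2", "a3"],
             hypothesis_arcs=["a1", "a4", "a3"])
print(graph.arc_weights)  # "a2" rewarded, "a4" penalized, shared arcs at 0.0
```

In practice such an update would more likely be derived from a lattice-based discriminative objective over many competing paths, but the structural point matches what the abstract emphasizes: the weights being trained live on the arcs of the unified decoding graph rather than in the separate component models.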
Keywords
discriminative training, finite-state decoding graph, language model, pronunciation model, low-resource speech recognition