Discriminative training of context-dependent language model scaling factors and interpolation weights

2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)

Abstract
We demonstrate how context-dependent language model scaling factors and interpolation weights can be unified in a single formulation where free parameters are discriminatively trained using linear and non-linear optimization. Objective functions of the optimization are defined based on pairs of superior and inferior recognition hypotheses and correlate well with recognition error metrics. Experiments on a large, real-world application demonstrated the effectiveness of the solution in significantly reducing recognition errors, by leveraging the benefits of both context-dependent weighting and discriminative training.
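The abstract describes per-context language model scaling factors and interpolation weights trained on pairs of superior and inferior hypotheses. As a rough illustration of that idea (not the paper's actual formulation or data), the sketch below optimizes a scaling factor `lam[c]` and an interpolation weight `w[c]` per context class with a pairwise logistic loss by gradient descent; all names and values are hypothetical:

```python
import math

# Illustrative sketch only: context-dependent LM scaling factor lam[c]
# and interpolation weight w[c], trained on (superior, inferior)
# hypothesis pairs with a pairwise logistic loss. Hypotheses are dicts
# with acoustic score "am", two LM scores "lm1"/"lm2" (higher = better),
# and a context class index "context". All details are assumptions.

def score(hyp, lam, w):
    c = hyp["context"]
    lm = w[c] * hyp["lm1"] + (1.0 - w[c]) * hyp["lm2"]  # interpolated LM
    return hyp["am"] + lam[c] * lm                      # acoustic + scaled LM

def train(pairs, n_ctx, lr=0.1, epochs=200):
    lam = [1.0] * n_ctx   # context-dependent scaling factors
    w = [0.5] * n_ctx     # context-dependent interpolation weights
    for _ in range(epochs):
        for good, bad in pairs:
            margin = score(good, lam, w) - score(bad, lam, w)
            # loss = log(1 + exp(-margin)); g = d(loss)/d(margin)
            g = -1.0 / (1.0 + math.exp(margin))
            for hyp, sign in ((good, 1.0), (bad, -1.0)):
                c = hyp["context"]
                lm = w[c] * hyp["lm1"] + (1.0 - w[c]) * hyp["lm2"]
                lam[c] -= lr * g * sign * lm
                w[c] -= lr * g * sign * lam[c] * (hyp["lm1"] - hyp["lm2"])
                w[c] = min(1.0, max(0.0, w[c]))  # keep weight in [0, 1]
    return lam, w

# Hypothetical pair: the superior hypothesis is favored by lm1 in context 0.
pairs = [
    ({"am": 0.0, "lm1": 2.0, "lm2": 0.5, "context": 0},
     {"am": 0.5, "lm1": 0.5, "lm2": 2.0, "context": 0}),
]
lam, w = train(pairs, n_ctx=1)
```

After training, the superior hypothesis in each pair should score above the inferior one, which is the sense in which such pairwise objectives correlate with recognition error.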
Keywords
discriminative training,language model factor,interpolation,context dependent