Recurrent Neural Network Language Model Adaptation for Multi-Genre Broadcast Speech Recognition and Alignment.

IEEE/ACM Transactions on Audio, Speech, and Language Processing (2019)

Abstract
Recurrent neural network language models (RNNLMs) generally outperform n-gram language models when used in automatic speech recognition (ASR). Adapting RNNLMs to new domains is an open problem, and current approaches can be categorised as either feature-based or model-based. In feature-based adaptation, the input to the RNNLM is augmented with auxiliary features, whilst model-based adaptation includ...
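To make the feature-based approach concrete, the following is a minimal sketch (not the paper's implementation) of an RNNLM step whose input is the word embedding concatenated with an auxiliary domain feature, such as a genre or topic vector. All names, dimensions, and the one-hot genre vector are illustrative assumptions.

```python
# Minimal sketch of feature-based RNNLM adaptation: the recurrent input is
# the word embedding concatenated with an auxiliary domain/genre feature.
# All sizes and parameter values are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim, aux_dim, hidden_dim = 10, 8, 3, 16

# Randomly initialised parameters for the sketch.
E = rng.normal(size=(vocab_size, embed_dim))               # word embeddings
W_xh = rng.normal(size=(embed_dim + aux_dim, hidden_dim))  # input -> hidden
W_hh = rng.normal(size=(hidden_dim, hidden_dim))           # hidden -> hidden
W_hy = rng.normal(size=(hidden_dim, vocab_size))           # hidden -> vocab

def rnnlm_step(word_id, aux_feat, h_prev):
    """One Elman-RNN step; aux_feat is the adaptation feature
    (e.g. a genre vector) appended to the word embedding."""
    x = np.concatenate([E[word_id], aux_feat])
    h = np.tanh(x @ W_xh + h_prev @ W_hh)
    logits = h @ W_hy
    probs = np.exp(logits - logits.max())   # softmax over the vocabulary
    probs /= probs.sum()
    return probs, h

# Usage: run a short word-id sequence under one (hypothetical) genre feature.
genre = np.array([1.0, 0.0, 0.0])   # e.g. one-hot "broadcast news"
h = np.zeros(hidden_dim)
for w in [3, 7, 1]:
    probs, h = rnnlm_step(w, genre, h)
print(probs.shape)  # next-word distribution over the vocabulary
```

Swapping the genre vector at test time adapts the model's predictions without changing any weights, which is the appeal of the feature-based route over retraining.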
Keywords
Adaptation models,Training,Speech recognition,Context modeling,Data models,Speech processing,Task analysis