Exploring neural transducers for end-to-end speech recognition

2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)

Cited by 245 | Viewed 217
Abstract
In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models that use a language model, on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue: RNN-Transducer models, rescored with a language model after beam search, outperform our best CTC models. These results simplify the speech recognition pipeline, so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models, examining cases where all encoder layers are forward-only and where encoders aggressively downsample the input representation.
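To illustrate the claim that decoding can be expressed purely as neural network operations, here is a minimal, hypothetical sketch of greedy (best-path) CTC decoding: take the argmax over each frame's posterior, collapse repeated symbols, and drop blanks. The `log_probs` matrix, vocabulary, and blank index are illustrative assumptions, not from the paper; the paper's stronger results use beam search with language-model rescoring, which this toy example does not show.

```python
def ctc_greedy_decode(log_probs, blank=0):
    """Best-path CTC decoding: per-frame argmax, collapse repeats, drop blanks.

    `log_probs` is assumed to be a (T, vocab) matrix of frame-wise scores,
    with index `blank` reserved for the CTC blank symbol.
    """
    # argmax over the vocabulary at each time step
    best_path = [max(range(len(frame)), key=frame.__getitem__) for frame in log_probs]
    decoded = []
    prev = None
    for idx in best_path:
        # emit a symbol only when it differs from the previous frame
        # (collapse repeats) and is not the blank
        if idx != prev and idx != blank:
            decoded.append(idx)
        prev = idx
    return decoded

# toy example: 5 frames over a 3-symbol vocabulary {blank=0, 'a'=1, 'b'=2}
frames = [
    [0.10, 0.80, 0.10],  # 'a'
    [0.10, 0.80, 0.10],  # 'a' (repeat, collapsed)
    [0.90, 0.05, 0.05],  # blank
    [0.10, 0.10, 0.80],  # 'b'
    [0.80, 0.10, 0.10],  # blank
]
print(ctc_greedy_decode(frames))  # -> [1, 2], i.e. "ab"
```

Because every step is an argmax or an elementwise comparison, the whole decode maps directly onto standard tensor operations, with no external weighted-FST decoder required.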
Keywords
Hub5'00 benchmark, CTC models, speech recognition pipeline, RNN-Transducer models, language model, Seq2Seq models, end-to-end speech recognition, neural transducers