Research on Acceleration Method of Speech Recognition Training.

ADVANCED COMPUTER ARCHITECTURE (2018)

Abstract
Recurrent Neural Networks (RNNs) are now widely used in speech recognition. Experiments show that they have significant advantages over traditional methods, but their heavy computation limits their application, especially in real-time scenarios. An RNN depends strongly on the preceding and following states during computation, and much of that information is redundant, so this overlap can be reduced to accelerate training. This paper constructs a training acceleration structure that reduces computation cost and speeds up training by discarding the RNN's dependence on pre- and post-states, and then corrects recognition errors with a text corrector. We verify the proposed method on the TIMIT and LibriSpeech datasets, showing that the approach achieves about a 3x speedup with only a small loss in relative accuracy.
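The abstract does not include code, so the sketch below only illustrates the core idea under stated assumptions: a conventional bidirectional recurrent acoustic model, whose frames depend on pre- and post-states, is contrasted with a stateless per-frame model in which all time steps can be computed in parallel. The class names, layer sizes, and the use of a small 1-D convolution for local context are illustrative assumptions, not the authors' architecture, and the follow-up text-correction stage is omitted.

```python
# Minimal sketch (assumption: not the paper's published code), contrasting a
# recurrent acoustic model with a stateless per-frame variant.
import torch
import torch.nn as nn


class RecurrentAcousticModel(nn.Module):
    """Baseline: each frame depends on previous/next hidden states (BiLSTM)."""
    def __init__(self, feat_dim=40, hidden=256, vocab=29):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab)

    def forward(self, x):          # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)         # sequential scan over time
        return self.out(h)         # (batch, time, vocab) logits


class StatelessAcousticModel(nn.Module):
    """Accelerated variant: frames are processed without pre-/post-state
    dependence, so all time steps run in parallel; a small 1-D convolution
    keeps limited local context instead of a full recurrent dependency."""
    def __init__(self, feat_dim=40, hidden=256, vocab=29, context=5):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, hidden, kernel_size=context,
                              padding=context // 2)
        self.mlp = nn.Sequential(nn.ReLU(), nn.Linear(hidden, hidden),
                                 nn.ReLU(), nn.Linear(hidden, vocab))

    def forward(self, x):          # x: (batch, time, feat_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # parallel over time
        return self.mlp(h)         # (batch, time, vocab) logits


if __name__ == "__main__":
    feats = torch.randn(8, 200, 40)  # dummy batch: 8 utterances, 200 frames
    print(RecurrentAcousticModel()(feats).shape)   # torch.Size([8, 200, 29])
    print(StatelessAcousticModel()(feats).shape)   # torch.Size([8, 200, 29])
```

Because the stateless variant removes the sequential scan over time, its forward and backward passes parallelize across frames, which is the source of the training speedup the abstract describes; the accuracy lost by dropping long-range state is what the subsequent text corrector is intended to recover.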
Keywords
Speech recognition, Accelerating training, Text correction