Exploring neural network architectures for acoustic modeling.

dblp (2017)

Cited by 23 | Views: 9
Abstract
Deep neural network (DNN)-based acoustic models (AMs) have significantly improved automatic speech recognition (ASR) on many tasks. However, ASR performance still suffers from speaker and environment variability, especially under low-resource, distant-microphone, noisy, and reverberant conditions. The goal of this thesis is to explore novel neural architectures that can effectively improve ASR performance.

In the first part of the thesis, we present a well-engineered, efficient open-source framework that enables the creation of arbitrary neural networks for speech recognition. We first design essential components to simplify the creation of a neural network with recurrent loops. Next, we propose several algorithms to speed up neural network training within this framework. We demonstrate the flexibility and scalability of the toolkit across different benchmarks.

In the second part of the thesis, we propose several new neural models, built with this toolkit, to reduce ASR word error rates (WERs). First, we formulate a new neural architecture, loosely inspired by humans, to process low-resource languages. Second, we demonstrate a way to enable very deep neural network models by adding more non-linearities and expressive power while keeping the model optimizable and generalizable. Experimental results demonstrate that our approach outperforms several ASR baselines and model variants, yielding a 10% relative WER gain. Third, we incorporate these techniques into an end-to-end recognition model. On the Wall Street Journal ASR task, this model achieves a 10.5% WER without any dictionary or language model, an 8.5% absolute improvement over the best published result.

Thesis Supervisor: James R. Glass
Title: Senior Research Scientist
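The abstract does not name the mechanism behind "adding more non-linearities and expressive power while keeping the model optimizable." One standard technique that fits this description is a highway connection, where a learned gate blends a non-linear transform with an identity path so gradients can traverse very deep stacks. The sketch below is illustrative only, not the thesis's actual model; the HighwayLayer class, the width of 320, and the depth of 20 are hypothetical choices.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """One highway layer: y = T(x) * H(x) + (1 - T(x)) * x.

    The transform gate T lets the layer fall back to the identity
    mapping, which is one standard way to keep very deep stacks
    trainable. (Hypothetical sketch, not the thesis's model.)
    """
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # gate T(x)
        self.content = nn.Linear(dim, dim)    # non-linearity H(x)
        # Bias the gate toward carrying the input early in training.
        nn.init.constant_(self.transform.bias, -2.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.sigmoid(self.transform(x))
        h = torch.relu(self.content(x))
        return t * h + (1.0 - t) * x

# Stack many layers; the gated identity path keeps the deep model optimizable.
net = nn.Sequential(*[HighwayLayer(320) for _ in range(20)])
frames = torch.randn(8, 320)  # a batch of stand-in acoustic features
print(net(frames).shape)      # torch.Size([8, 320])
```

Without the identity path, a plain 20-layer stack of Linear+ReLU blocks of this width is noticeably harder to optimize from random initialization, which is the trade-off the abstract alludes to.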
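Similarly, the abstract does not state which training criterion the end-to-end model uses. Connectionist temporal classification (CTC) is one common way to train a recognizer directly on character sequences with no dictionary or language model; the following minimal sketch applies PyTorch's nn.CTCLoss to randomly generated stand-in data (all shapes and label counts are placeholder assumptions).

```python
import torch
import torch.nn as nn

# CTC loss with the blank symbol at index 0.
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

T, N, C = 100, 4, 30  # frames, batch size, output symbols (incl. blank)

# Log-probabilities over symbols per frame, shape (T, N, C), as CTC expects.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)

# Hypothetical character targets (values 1..C-1; 0 is reserved for blank).
targets = torch.randint(1, C, (N, 12))
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow to the (stand-in) network outputs
print(loss.item())
```

Because CTC marginalizes over all frame-level alignments of the character sequence, no pronunciation dictionary is needed, matching the "without any dictionary or language model" setting described above.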