Improving Keyword Spotting and Language Identification via Neural Architecture Search at Scale

INTERSPEECH (2019)

Abstract
In this paper we present a novel Neural Architecture Search (NAS) framework to improve keyword spotting and spoken language identification models. Despite the huge success of deep neural networks (DNNs) across many domains, finding the best network architecture remains a laborious task and, at best, very computationally expensive with existing search approaches. Our search approach efficiently and robustly finds models that outperform hand-designed systems. We do this by constructing architectures incrementally, using a custom mutation algorithm and leveraging parameter transfer between layers. We demonstrate that our approach can automatically design DNNs with an order of magnitude fewer parameters that achieve better performance than the current best models. It leads to significant performance improvements: up to 4.09% accuracy increase for language identification (6.1% if we allow an increase in the number of parameters) and 0.3% for phoneme classification in keyword spotting with a model half the size.
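The abstract describes an incremental search: architectures are grown by mutation, and weights of unchanged layers are carried over to the child so each candidate does not have to be trained from scratch. As a minimal sketch of that loop (the concrete mutation operators, fitness measure, and names below are assumptions, not taken from the paper):

```python
import random

def mutate(arch, rng):
    """One hypothetical mutation: append a new layer or widen an existing one.
    An architecture is modeled simply as a list of layer widths."""
    child = list(arch)
    if rng.random() < 0.5:
        child.append(rng.choice([32, 64, 128]))   # grow: add a layer
    else:
        i = rng.randrange(len(child))
        child[i] *= 2                             # widen an existing layer
    return child

def transfer(parent_weights, parent_arch, child_arch):
    """Parameter transfer: reuse weights for layers identical in parent and
    child; layers that changed shape (or are new) get a fresh init marker."""
    weights = {}
    for i, width in enumerate(child_arch):
        if i < len(parent_arch) and parent_arch[i] == width:
            weights[i] = parent_weights[i]        # copy trained weights
        else:
            weights[i] = f"init({width})"         # placeholder for random init
    return weights

def proxy_score(arch):
    """Toy stand-in fitness: reward capacity, penalize parameter count,
    mimicking the paper's goal of smaller-but-better models."""
    return sum(w ** 0.5 for w in arch) - 0.01 * sum(arch)

def search(seed_arch, steps=20, rng=None):
    """Greedy mutation loop: keep a child only if its score improves."""
    rng = rng or random.Random(0)
    best_arch = list(seed_arch)
    best_weights = {i: f"init({w})" for i, w in enumerate(best_arch)}
    best_score = proxy_score(best_arch)
    for _ in range(steps):
        child = mutate(best_arch, rng)
        child_weights = transfer(best_weights, best_arch, child)
        if proxy_score(child) > best_score:
            best_arch, best_weights = child, child_weights
            best_score = proxy_score(child)
    return best_arch, best_score
```

In a real system, `proxy_score` would be replaced by training the child network (warm-started via `transfer`) and evaluating validation accuracy; the greedy acceptance could be replaced by a population-based scheme.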