Multichannel ASR with Knowledge Distillation and Generalized Cross Correlation Feature

SLT 2018

Abstract
Multi-channel signal processing techniques have played an important role in far-field automatic speech recognition (ASR) as a separate front-end enhancement stage. However, they often suffer from a mismatch problem. In this paper, we propose a novel acoustic model architecture in which the multi-channel speech is used directly, without preprocessing. In addition, knowledge distillation and generalized cross correlation (GCC) adaptation are employed. Knowledge distillation transfers knowledge from a well-trained close-talking model to the distant-talking scenario for every frame of the multi-channel distant speech. Moreover, the GCC between microphones, which carries spatial information, is supplied as an auxiliary input to the neural network. We observe that the two techniques complement each other well. Evaluated on the AMI and ICSI meeting corpora, the proposed methods achieve relative WER improvements of 7.7% and 7.5%, respectively, over a model trained directly on the concatenated multi-channel speech.
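
The abstract does not spell out how the GCC feature or the frame-level distillation target is computed; the sketch below is only a rough illustration of the two ingredients it names. It assumes a GCC-PHAT variant of the cross correlation and a softened cross-entropy distillation loss; the function names, FFT size, maximum lag, and temperature are illustrative assumptions, not the authors' settings.

import numpy as np

def gcc_phat(x, y, n_fft=512, max_lag=32):
    # Cross power spectrum between two microphone channels with PHAT weighting.
    X = np.fft.rfft(x, n=n_fft)
    Y = np.fft.rfft(y, n=n_fft)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-8
    cc = np.fft.irfft(cross, n=n_fft)
    # Keep the correlation values around zero lag (-max_lag .. +max_lag);
    # this vector can be appended to the spectral features as spatial input.
    return np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def frame_kd_loss(student_logits, teacher_logits, temperature=2.0):
    # Teacher posteriors from the close-talking model serve as frame-level
    # soft targets for the student trained on multi-channel distant speech.
    soft_targets = np.exp(log_softmax(teacher_logits / temperature))
    student_logp = log_softmax(student_logits / temperature)
    return -(soft_targets * student_logp).sum(axis=-1).mean()

# Hypothetical usage: one 25 ms frame (400 samples at 16 kHz) per channel
# yields a 65-dimensional GCC-PHAT feature vector.
frame_i = np.random.randn(400)
frame_j = np.random.randn(400)
gcc_feature = gcc_phat(frame_i, frame_j)   # shape (65,)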
Keywords
Microphones,Data models,Adaptation models,Neural networks,Speech recognition,Training,Correlation