Encoder-decoder with Focus-mechanism for Sequence Labelling Based Spoken Language Understanding

2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
This paper investigates the encoder-decoder with attention framework for sequence labelling based spoken language understanding. We introduce a BLSTM-LSTM encoder-decoder model to fully exploit the power of deep learning. In the sequence labelling task, the input and output sequences are aligned word by word, whereas the attention mechanism cannot provide an exact alignment. To address this limitation of attention in sequence labelling, we propose a novel focus mechanism. Experiments on the standard ATIS dataset showed that the BLSTM-LSTM with focus mechanism achieved a new state of the art, outperforming both a standard BLSTM tagger and an attention-based encoder-decoder. Further experiments also showed that the proposed model is more robust to speech recognition errors.
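For concreteness, below is a minimal PyTorch sketch of the idea: a BLSTM encoder and an LSTM decoder where, at step t, the decoder is fed the encoder hidden state at position t directly (the "focus") rather than an attention-weighted sum. The class name, dimensions, start-label convention, and teacher-forcing loop are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class FocusEncoderDecoder(nn.Module):
    """Sketch of a BLSTM encoder + LSTM decoder with a focus mechanism:
    at decoding step t the decoder consumes the encoder hidden state h_t
    directly, exploiting the word-by-word alignment of sequence labelling."""

    def __init__(self, vocab_size, num_labels, emb_dim=100, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Decoder input = previous label embedding + focused encoder state.
        self.label_embed = nn.Embedding(num_labels, emb_dim)
        self.decoder = nn.LSTMCell(emb_dim + 2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, num_labels)

    def forward(self, words, labels):
        # words:  (batch, seq_len) word ids
        # labels: (batch, seq_len) gold label ids, used for teacher forcing
        enc_states, _ = self.encoder(self.embed(words))        # (B, T, 2H)
        batch, seq_len, _ = enc_states.shape
        h = enc_states.new_zeros(batch, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        prev_label = labels.new_zeros(batch)                   # assume id 0 = start label
        logits = []
        for t in range(seq_len):
            # Focus mechanism: use h_t for position t, no attention weights.
            dec_in = torch.cat([self.label_embed(prev_label), enc_states[:, t]], dim=-1)
            h, c = self.decoder(dec_in, (h, c))
            logits.append(self.out(h))
            prev_label = labels[:, t]                          # teacher forcing
        return torch.stack(logits, dim=1)                      # (B, T, num_labels)
```

Because the alignment is fixed, no attention distribution needs to be learned: each output label sees exactly one encoder state plus the decoder's recurrent summary of previously predicted labels.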
Keywords
Spoken language understanding, encoder-decoder, focus-mechanism, robustness