Transfer Learning of Transformers for Spoken Language Understanding

Text, Speech, and Dialogue (TSD 2022)

Abstract
Pre-trained models used in transfer-learning scenarios have recently become very popular. Such models benefit from the availability of large sets of unlabeled data. Two such models are the Wav2Vec 2.0 speech recognizer and the T5 text-to-text transformer. In this paper, we describe a novel application of these models to dialog systems, in which both the speech recognizer and the spoken language understanding module are represented as Transformer models. This composition outperforms a baseline built from a DNN-HMM speech recognizer and a CNN-based understanding module.
Keywords
Wav2Vec model, Speech recognition, T5 model, Spoken language understanding
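To make the described composition concrete, below is a minimal sketch of a Wav2Vec 2.0 → T5 pipeline using the Hugging Face Transformers library. The checkpoints ("facebook/wav2vec2-base-960h", "t5-small"), the "parse:" prompt, and the output format are illustrative assumptions, not the fine-tuned models or training setup used in the paper.

```python
# Illustrative Wav2Vec 2.0 -> T5 spoken language understanding pipeline.
# Checkpoints and prompt format are placeholders, not the paper's models.
import numpy as np
import torch
from transformers import (
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
    T5Tokenizer,
    T5ForConditionalGeneration,
)

# Speech recognition stage: Wav2Vec 2.0 with a CTC head transcribes raw audio.
asr_processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
asr_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Understanding stage: T5 maps the transcript to a semantic representation,
# framed as text-to-text generation (the target scheme here is hypothetical).
slu_tokenizer = T5Tokenizer.from_pretrained("t5-small")
slu_model = T5ForConditionalGeneration.from_pretrained("t5-small")


def transcribe(waveform: np.ndarray, sampling_rate: int = 16_000) -> str:
    """Decode a mono 16 kHz waveform into text with greedy CTC decoding."""
    inputs = asr_processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = asr_model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return asr_processor.batch_decode(ids)[0]


def understand(transcript: str) -> str:
    """Generate a semantic-frame string from the transcript with T5."""
    inputs = slu_tokenizer("parse: " + transcript, return_tensors="pt")
    with torch.no_grad():
        output_ids = slu_model.generate(**inputs, max_length=64)
    return slu_tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    # One second of silence as a placeholder input waveform.
    dummy_audio = np.zeros(16_000, dtype=np.float32)
    text = transcribe(dummy_audio)
    print("transcript:", text)
    print("semantic frame:", understand(text))
```

In this cascaded setup the two Transformers are trained and applied independently; the transcript is simply passed as text to T5, which is the composition the abstract contrasts with the DNN-HMM plus CNN baseline.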