Knowledge-Grounded Response Generation With Deep Attentional Latent-Variable Model

Hao-Tong Ye, Kai-Ling Lo, Shang-Yu Su, Yun-Nung Chen

Computer Speech & Language (2020)

Cited by 20
Abstract
End-to-end dialogue generation has achieved promising results without using handcrafted features or attributes specific to each task and corpus. However, a major drawback of such approaches is that they are unable to generate informative utterances, which limits their use in real-world conversational applications. To tackle this issue, this paper generates diverse and informative responses with a variational generation model that contains a joint attention mechanism conditioned on information from both dialogue contexts and external knowledge. Experiments on the benchmark DSTC7 data show that the proposed method generates responses with more grounded knowledge and improves the diversity of the generated language. (c) 2020 Elsevier Ltd. All rights reserved.
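The two ingredients named in the abstract, a joint attention mechanism over dialogue context and external knowledge, and a Gaussian latent variable sampled via the reparameterization trick, can be sketched as follows. This is a minimal illustration, not the authors' implementation; all function names, the dot-product attention form, and the diagonal-Gaussian latent are assumptions.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def joint_attention(query, context_vecs, knowledge_vecs):
    """Attend jointly over dialogue-context and knowledge memory vectors.

    The context and knowledge encodings are placed in a single memory, so
    one softmax distributes attention weight across both sources
    (an assumed reading of "joint attention" in the abstract).
    """
    memory = context_vecs + knowledge_vecs
    weights = softmax([dot(query, m) for m in memory])
    dim = len(query)
    return [sum(w * m[i] for w, m in zip(weights, memory)) for i in range(dim)]

def sample_latent(mu, log_var, rng):
    """Reparameterized sample z = mu + sigma * eps, eps ~ N(0, 1)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# Toy usage: one decoder step attends over two context vectors and one
# knowledge vector, then samples a latent code to condition generation.
rng = random.Random(0)
context = [[1.0, 0.0], [0.0, 1.0]]
knowledge = [[1.0, 1.0]]
query = [1.0, 0.0]
attended = joint_attention(query, context, knowledge)
z = sample_latent(attended, [0.0, 0.0], rng)
```

Because the attention weights form a convex combination, `attended` stays inside the span of the memory vectors; the latent sample `z` then injects the stochasticity that drives response diversity.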
Keywords
Knowledge-grounded, Response generation, Variational model