Exploiting Background Knowledge in Compact Answer Generation for Why-questions

THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE (2019)

Citations 26 | Views 35
Abstract
This paper proposes a novel method for generating compact answers to open-domain why-questions, such as the answer "Because deep learning technologies were introduced" to the question "Why did Google's machine translation service improve so drastically?" Although many works have dealt with why-question answering, most have focused on retrieving, as answers, relatively long text passages that consist of several sentences. Because of their length, such passages are not appropriate to be read aloud by spoken dialog systems and smart speakers; hence, we need a method that generates compact answers. We developed a novel neural summarizer for this compact answer generation task. It combines a recurrent neural network-based encoder-decoder model with stacked convolutional neural networks and was designed to effectively exploit background knowledge, in this case a set of causal relations (e.g., "[Microsoft's machine translation has made great progress over the last few years](effect) since [it started to use deep learning.](cause)") extracted from a large web data archive (4 billion web pages). Our experimental results show that our method achieved significantly better ROUGE F-scores than existing encoder-decoder models and their variations augmented with query-attention and memory networks, which are used to exploit the background knowledge.
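The abstract reports results in terms of ROUGE F-scores. As a rough illustration of what that metric measures, here is a minimal sketch of a ROUGE-1 F-score (unigram-overlap F1) in plain Python; the paper itself would have used a standard ROUGE implementation with stemming and multiple n-gram orders, so this simplified function is only for intuition.

```python
from collections import Counter


def rouge1_f(reference: str, hypothesis: str) -> float:
    """ROUGE-1 F-score: harmonic mean of unigram precision and recall.

    Simplified sketch: whitespace tokenization, no stemming or stopword
    handling, clipped (multiset) unigram counts.
    """
    ref_counts = Counter(reference.lower().split())
    hyp_counts = Counter(hypothesis.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in the reference.
    overlap = sum((ref_counts & hyp_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

For example, a compact hypothesis answer sharing all five reference unigrams but adding two extra words gets recall 1.0 and precision 5/7, i.e. an F-score of 10/12 ≈ 0.83.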