Negative Guided Abstractive Dialogue Summarization

Conference of the International Speech Communication Association (INTERSPEECH), 2022

Abstract
The goal of abstractive dialogue summarization is to generate a shorter form of a long conversation while retaining its most salient information, a task that plays an important role in speech applications. Unlike well-structured text such as scientific articles and news, dialogues often comprise utterances from multiple interlocutors, and the conversations are frequently informal, verbose, repetitive, and sprinkled with false starts, backchanneling, reconfirmations, hesitations, and speaker interruptions. This noisy information introduces new challenges for summarizing dialogues. In this work, we extend the widely used sequence-to-sequence summarization framework with a negative guided mechanism, which allows models to explicitly perceive the unnecessary pieces (i.e., noise) of a dialogue and thus focus more on the salient information. Specifically, the negative guided mechanism has two main components: negative example construction and a negative guided loss. We explore two different ways of constructing negative examples and use them to compute the negative loss. Extensive experiments on benchmark datasets demonstrate that our method significantly outperforms the baselines on both semantic-matching and factual-consistency metrics. We also conduct human evaluation to confirm the performance gains.
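The abstract does not specify the exact form of the negative guided loss. A common way to combine a positive (reference) objective with a negative (noisy) example is a margin penalty that discourages the model from assigning the negative sequence high likelihood; the sketch below illustrates that idea on per-token log-probabilities. The function names, the hinge formulation, and the `margin` parameter are all assumptions for illustration, not the paper's actual method.

```python
import math

def nll(log_probs):
    """Average negative log-likelihood of a token sequence,
    given per-token log-probabilities from a seq2seq decoder."""
    return -sum(log_probs) / len(log_probs)

def negative_guided_loss(pos_log_probs, neg_log_probs, margin=1.0):
    """Hypothetical margin-based combination of a positive and a
    negative example: minimize the loss on the reference summary,
    and additionally penalize the model whenever the negative
    (noise-containing) sequence is not at least `margin` worse."""
    l_pos = nll(pos_log_probs)   # loss on the reference summary
    l_neg = nll(neg_log_probs)   # loss on the constructed negative example
    # Hinge term: active only when the negative example is too likely.
    return l_pos + max(0.0, margin - (l_neg - l_pos))
```

With this shape, a model that already separates the two sequences by more than the margin pays only the ordinary reference loss; one that scores the noisy negative almost as well as the reference receives an extra penalty.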
Keywords
dialogue