SPECTRUM: Speaker-Enhanced Pre-Training for Long Dialogue Summarization
CoRR (2024)
Abstract
Multi-turn dialogues are characterized by their extended length and the
presence of turn-taking conversations. Traditional language models often
overlook the distinct features of these dialogues by treating them as regular
text. In this paper, we propose a speaker-enhanced pre-training method for long
dialogue summarization, which leverages the inherent structure of multiple-turn
dialogues. To support our study, we curate a diverse dataset that includes
transcripts from real-world scenarios, movie or TV show transcripts, and
dialogues generated by a Large Language Model. We then pre-train on two
objectives: speaker change detection and masked utterance generation.
Experimental results of fine-tuned models demonstrate that our
model achieves state-of-the-art performance on downstream benchmarks with long
context, surpassing baseline models and highlighting the effectiveness of our
approach. Our findings highlight the importance of curating pre-training
datasets that exhibit diversity and variations in length distribution to ensure
effective alignment with downstream datasets.
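
The two pre-training objectives can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's implementation: it shows how a multi-turn dialogue might yield speaker-change labels (one per turn) and a masked-utterance-generation target. The function name, the mask token, and the data layout are all hypothetical.

```python
# Hypothetical sketch of the two pre-training objectives named in the
# abstract: speaker change detection and masked utterance generation.
# All names and formats here are illustrative, not the paper's code.

MASK = "<mask>"

def build_pretraining_example(dialogue, mask_index):
    """dialogue: list of (speaker, utterance) turns.

    Returns per-turn speaker-change labels (1 when the speaker differs
    from the previous turn), the dialogue with one utterance masked,
    and the hidden utterance as the generation target."""
    change_labels = [0] + [
        int(dialogue[i][0] != dialogue[i - 1][0])
        for i in range(1, len(dialogue))
    ]
    masked_input = [
        (spk, MASK if i == mask_index else utt)
        for i, (spk, utt) in enumerate(dialogue)
    ]
    target = dialogue[mask_index][1]
    return change_labels, masked_input, target

dialogue = [
    ("Alice", "Did you finish the report?"),
    ("Bob", "Almost, I need one more day."),
    ("Bob", "The data section took longer than expected."),
    ("Alice", "Okay, send it tomorrow."),
]
labels, masked, target = build_pretraining_example(dialogue, mask_index=1)
print(labels)   # [0, 1, 0, 1]
print(target)   # Almost, I need one more day.
```

In a real pre-training setup, the change labels would supervise a turn-level classification head while the masked utterance would be reconstructed by the decoder; this sketch only shows how the supervision signals can be derived from dialogue structure itself, with no manual annotation.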