Learning to model domain-specific utterance sequences for extractive summarization of contact center dialogues

COLING (Posters), 2010

Cited by 25 | Views 10
Abstract
This paper proposes a novel extractive summarization method for contact center dialogues. We use a particular type of hidden Markov model (HMM) called Class Speaker HMM (CSHMM), which processes operator/caller utterance sequences of multiple domains simultaneously to model domain-specific utterance sequences and common (domain-wide) sequences at the same time. We applied the CSHMM to call summarization of transcripts in six different contact center domains and found that our method significantly outperforms competitive baselines based on the maximum coverage of important words using integer linear programming.
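As a rough illustration of the baseline mentioned in the abstract, the sketch below formulates maximum coverage of important words as an integer linear program. It is not taken from the paper: the toy dialogue, the word weights, the length budget, and the use of the PuLP solver are all illustrative assumptions.

```python
# Hypothetical sketch of a maximum-word-coverage ILP baseline: select
# utterances so that the total weight of covered important words is
# maximized under a length budget. All data here is illustrative.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

utterances = [
    "i would like to cancel my order",           # caller
    "sure let me check the order number",        # operator
    "the order has been cancelled successfully", # operator
]
word_weight = {"cancel": 2.0, "order": 1.5, "number": 0.5, "cancelled": 2.0}
length_budget = 12  # maximum summary length in tokens (assumed)

tokens = [u.split() for u in utterances]

prob = LpProblem("max_word_coverage_summary", LpMaximize)
x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(len(utterances))]
y = {w: LpVariable(f"y_{j}", cat=LpBinary) for j, w in enumerate(word_weight)}

# Objective: total weight of important words covered by the summary.
prob += lpSum(word_weight[w] * y[w] for w in word_weight)

# Length budget on the selected utterances.
prob += lpSum(len(tokens[i]) * x[i] for i in range(len(utterances))) <= length_budget

# A word counts as covered only if some selected utterance contains it.
for w in word_weight:
    prob += y[w] <= lpSum(x[i] for i in range(len(utterances)) if w in tokens[i])

prob.solve()
summary = [utterances[i] for i in range(len(utterances)) if x[i].value() == 1]
print(summary)
```

The paper's proposed method differs from this baseline: it decodes each operator/caller utterance sequence with a Class Speaker HMM and extracts the utterances assigned to summary-worthy states, rather than solving a coverage ILP.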
Keywords
class speaker hmm,important word,different contact center domain,hidden markov model,novel extractive summarization method,integer linear programming,caller utterance sequence,competitive baselines,contact center dialogue,domain-specific utterance sequence,linear program