Cross-modal knowledge guided model for abstractive summarization

Complex & Intelligent Systems (2024)

Abstract
Abstractive summarization (AS) aims to generate more flexible and informative descriptions than extractive summarization. Nevertheless, it often distorts or fabricates facts in the original article. To address this problem, some existing approaches attempt to evaluate or verify factual consistency, or design models to reduce factual errors. However, most of these efforts either have limited effect or lower ROUGE scores while reducing factual errors. In other words, it is challenging to promote factual consistency while maintaining the informativeness of generated summaries. Inspired by knowledge graph embedding techniques, in this paper we propose a novel cross-modal knowledge guided model (CKGM) for AS, which embeds a multimodal knowledge graph (MKG) combining image entity-relationship information and textual factual information (FI) into BERT to accomplish cross-modal information interaction and knowledge expansion. The pre-training method contributes rich contextual semantic information, while the knowledge graph supplements the textual information. In addition, an entity memory embedding algorithm is proposed to improve information fusion efficiency and model training speed. We conducted ablation experiments and evaluated our model on the Visual Genome, FewRel, MSCOCO, and CNN/DailyMail datasets. Experimental results demonstrate that our model significantly improves the FI consistency and informativeness of generated summaries.
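The abstract does not specify how the knowledge graph embeddings are injected into BERT. As a rough illustration only, the following minimal sketch shows one common pattern for this family of knowledge-enhanced encoders: projecting knowledge-graph entity embeddings into the token-embedding space and fusing them into the contextual representations of their linked tokens via a gated residual. All names, dimensions, and the gating scheme here are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch (NOT the authors' CKGM implementation): fuse
# knowledge-graph entity embeddings into contextual token embeddings.
rng = np.random.default_rng(0)

d_tok, d_ent = 8, 4                       # assumed embedding sizes
tokens = rng.normal(size=(5, d_tok))      # stand-in for BERT token outputs
entities = rng.normal(size=(2, d_ent))    # stand-in for KG entity embeddings
align = {1: 0, 3: 1}                      # token index -> linked entity index

# Learned projection from entity space into token space (random stand-in).
W = rng.normal(size=(d_ent, d_tok)) * 0.1

fused = tokens.copy()
for t_idx, e_idx in align.items():
    proj = entities[e_idx] @ W                             # entity -> token space
    gate = 1.0 / (1.0 + np.exp(-(tokens[t_idx] @ proj)))   # scalar sigmoid gate
    fused[t_idx] = tokens[t_idx] + gate * proj             # gated residual fusion

# Only tokens with a linked entity are modified; the rest pass through.
print(fused.shape)
```

In a real model the projection and gate would be trained jointly with the encoder; this toy version only demonstrates the data flow of entity-to-token fusion.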