SIFiD: Reassess Summary Factual Inconsistency Detection with LLM
arXiv (2024)
Abstract
Ensuring factual consistency between the summary and the original document is
paramount in summarization tasks. Consequently, considerable effort has been
dedicated to detecting inconsistencies. With the advent of Large Language
Models (LLMs), recent studies have begun to leverage their advanced language
understanding capabilities for inconsistency detection. However, early attempts
have shown that LLMs underperform traditional models due to their limited
ability to follow instructions and the absence of an effective detection
methodology. In this study, we reassess summary inconsistency detection with
LLMs, comparing the performance of GPT-3.5 and GPT-4. To advance research in
LLM-based inconsistency detection, we propose SIFiD (Summary Inconsistency
Detection with Filtered Document), which identifies key sentences within
documents by either employing natural language inference or measuring semantic
similarity between summaries and documents.
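The similarity-based variant of the idea above can be sketched as follows: score each document sentence against every summary sentence and keep only the document sentences that clear a threshold, yielding a filtered document for the LLM to check. This is a minimal illustration, not the paper's implementation; the abstract does not specify the similarity function, so a simple token-level Jaccard overlap stands in for whatever embedding- or NLI-based scorer SIFiD actually uses, and the threshold value is arbitrary.

```python
# Hedged sketch of similarity-based document filtering in the spirit of SIFiD.
# Assumptions: Jaccard token overlap stands in for the paper's (unspecified)
# similarity measure, and the 0.2 threshold is illustrative only.

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def filter_document(doc_sentences, summary_sentences, threshold=0.2):
    """Keep document sentences whose best similarity to any summary
    sentence meets the threshold; the filtered document would then be
    passed to the LLM for inconsistency detection."""
    kept = []
    for sent in doc_sentences:
        score = max(jaccard(sent, s) for s in summary_sentences)
        if score >= threshold:
            kept.append(sent)
    return kept

doc = [
    "The company reported record profits in 2023.",
    "The CEO enjoys hiking on weekends.",
    "Profits rose due to strong overseas sales.",
]
summary = ["The company reported record profits driven by overseas sales."]
print(filter_document(doc, summary))
# The off-topic CEO sentence is filtered out; the two profit-related
# sentences remain.
```

Filtering this way shortens the context the LLM must reason over, which is one plausible reason the paper pairs filtering with GPT-3.5/GPT-4 rather than feeding them the full document.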