GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence
CoRR (2024)
Abstract
LLMs can generate factually incorrect statements even when provided access to
reference documents. Such errors can be dangerous in high-stakes applications
(e.g., document-grounded QA for healthcare or finance). We present GenAudit,
a tool to assist in fact-checking LLM responses for document-grounded
tasks. GenAudit suggests edits to the LLM response by revising or removing
claims that are not supported by the reference document, and also presents
evidence from the reference for facts that do appear to have support. We train
models to execute these tasks, and design an interactive interface to present
suggested edits and evidence to users. Comprehensive evaluation by human raters
shows that GenAudit can detect errors in outputs from 8 different LLMs when
summarizing documents from diverse domains. To ensure that most errors are
flagged by the system, we propose a method that can increase the error recall
while minimizing impact on precision. We will release our tool (GenAudit) and
fact-checking model for public use.
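The abstract mentions a method that increases error recall while minimizing the impact on precision, without detailing it. A common way to trade precision for recall in a binary error detector is to lower its decision threshold subject to a precision floor. The sketch below illustrates that general idea only; it is not the paper's actual method, and all function names and data are hypothetical.

```python
# Hypothetical sketch: tuning the decision threshold of a binary
# error-detection classifier to raise recall while keeping precision
# above a floor. Illustrative only; NOT the method from the paper.

def precision_recall(scores, labels, threshold):
    """Compute precision and recall when flagging items with
    score >= threshold as errors (label 1 = true error)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def pick_threshold(scores, labels, min_precision=0.8):
    """Return the lowest threshold (i.e., highest recall) whose
    precision on held-out data still meets the floor."""
    for t in sorted(set(scores)):  # ascending: lowest threshold first
        p, _ = precision_recall(scores, labels, t)
        if p >= min_precision:
            return t
    return 1.0  # no threshold meets the floor; flag nothing
```

For example, with scores `[0.9, 0.8, 0.4, 0.3, 0.2]` and labels `[1, 1, 1, 0, 0]`, a precision floor of 0.8 selects threshold 0.4, which flags all three true errors and neither non-error.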