Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate
CoRR (2024)
Abstract
Fact-checking research has extensively explored verification but less so the
generation of natural-language explanations, crucial for user trust. While
Large Language Models (LLMs) excel in text generation, their capability for
producing faithful explanations in fact-checking remains underexamined. Our
study investigates LLMs' ability to generate such explanations, finding that
zero-shot prompts often result in unfaithfulness. To address these challenges,
we propose the Multi-Agent Debate Refinement (MADR) framework, leveraging
multiple LLMs as agents with diverse roles in an iterative refining process
aimed at enhancing faithfulness in generated explanations. MADR ensures that
the final explanation undergoes rigorous validation, significantly reducing the
likelihood of unfaithful elements and aligning closely with the provided
evidence. Experimental results demonstrate that MADR significantly improves the
faithfulness of LLM-generated explanations to the evidence, advancing the
credibility and trustworthiness of these explanations.
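The abstract describes MADR only at a high level. The loop below is a minimal Python sketch of what such a multi-agent debate-and-refinement cycle could look like; the agent roles, prompt wording, stopping rule, and the `llm` callable are all illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch of a MADR-style refinement loop.
# The roles, prompts, and `llm` interface are assumptions for illustration.

def madr_refine(claim, evidence, llm, max_rounds=3):
    """Iteratively refine an explanation until critic agents stop objecting."""
    # A generator agent drafts an initial explanation from the evidence.
    explanation = llm(
        f"Explain the fact-checking verdict for: {claim}\nEvidence: {evidence}"
    )

    for _ in range(max_rounds):
        # Critic agents with different roles each flag unfaithful content.
        critiques = [
            llm(
                f"As a {role}, list any statements in this explanation that "
                f"are not supported by the evidence.\nEvidence: {evidence}\n"
                f"Explanation: {explanation}"
            )
            for role in ("fact verifier", "consistency checker")
        ]
        # A judge agent decides whether the critiques reveal real problems.
        verdict = llm(
            "Do these critiques identify genuine unfaithfulness? "
            "Answer YES or NO.\n" + "\n".join(critiques)
        )
        if verdict.strip().upper().startswith("NO"):
            break  # explanation passed validation
        # A refiner agent revises the explanation to address the critiques.
        explanation = llm(
            "Revise the explanation to fix these issues:\n"
            + "\n".join(critiques)
            + f"\nEvidence: {evidence}\nExplanation: {explanation}"
        )
    return explanation
```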