ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs
arXiv (2024)
Abstract
In the midst of widespread misinformation and disinformation spread through
social media and the proliferation of AI-generated text, it has become
increasingly difficult for people to validate and trust the information they
encounter. Many
fact-checking approaches and tools have been developed, but they often lack
appropriate explainability or granularity to be useful in various contexts. A
text validation method that is easy to use, accessible, and capable of performing
fine-grained evidence attribution has become crucial. More importantly,
building user trust in such a method requires presenting the rationale behind
each prediction, as research shows this significantly influences people's
belief in automated systems. It is also paramount to localize and bring users'
attention to the specific problematic content, instead of providing simple
blanket labels. In this paper, we present ClaimVer, a human-centric
framework tailored to meet users' informational and verification needs by
generating rich annotations and thereby reducing cognitive load. Designed to
deliver comprehensive evaluations of texts, it highlights each claim, verifies
it against a trusted knowledge graph (KG), presents the evidence, and provides
succinct, clear explanations for each claim prediction. Finally, our framework
introduces an attribution score, enhancing applicability across a wide range of
downstream tasks.
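To make the pipeline the abstract outlines concrete, here is a minimal Python sketch of the claim-level loop: split the input text into claims, check each against a trusted KG, and attach evidence, a succinct explanation, and an attribution score. Everything below is illustrative; the function and field names, the sentence-based claim splitter, the lookup-style "verifier", and the toy attribution score are assumptions for exposition, not the paper's actual models or API.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the KG is modeled as a set of
# (subject, relation, object) triples, and a trivial string-match
# lookup stands in for ClaimVer's actual verification model.

Triple = tuple[str, str, str]

@dataclass
class ClaimResult:
    claim: str                # one claim extracted from the input text
    verdict: str              # e.g. "supported" / "not enough info"
    evidence: list[Triple]    # KG triples surfaced as evidence
    explanation: str          # succinct rationale shown to the user
    attribution_score: float  # per-claim evidence-attribution score

def extract_claims(text: str) -> list[str]:
    # Placeholder claim splitter: one claim per sentence.
    return [s.strip() for s in text.split(".") if s.strip()]

def verify_text(text: str, kg: set[Triple]) -> list[ClaimResult]:
    results = []
    for claim in extract_claims(text):
        # Placeholder retrieval: keep triples whose subject appears in the claim.
        evidence = [t for t in kg if t[0].lower() in claim.lower()]
        # Placeholder verdict: supported if an evidence triple's object also appears.
        supported = any(t[2].lower() in claim.lower() for t in evidence)
        results.append(ClaimResult(
            claim=claim,
            verdict="supported" if supported else "not enough info",
            evidence=evidence,
            explanation=(f"Matched KG triple(s): {evidence}" if supported
                         else "No matching triple found in the KG."),
            # Toy attribution score: 1.0 when KG evidence supports the claim.
            attribution_score=1.0 if supported else 0.0,
        ))
    return results

if __name__ == "__main__":
    kg = {("Paris", "capital_of", "France")}
    for r in verify_text("Paris is the capital of France. Paris is in Spain.", kg):
        print(r.verdict, "|", r.claim)
```

Each `ClaimResult` corresponds to one highlighted claim in the framework's output, keeping the per-claim granularity (verdict, evidence, explanation, score) that the abstract emphasizes over blanket document-level labels.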