Generating Zero-shot Abstractive Explanations for Rumour Verification
arXiv (2024)
Abstract
The task of rumour verification in social media concerns assessing the
veracity of a claim on the basis of conversation threads that result from it.
While previous work has focused on predicting a veracity label, here we
reformulate the task to generate model-centric free-text explanations of a
rumour's veracity. The approach is model-agnostic in that it generalises to any
model; here we propose a novel GNN-based rumour verification model. We follow a
zero-shot approach by first applying post-hoc explainability methods to score
the most important posts within a thread and then we use these posts to
generate informative explanations using opinion-guided summarisation. To
evaluate the informativeness of the explanatory summaries, we exploit the
few-shot learning capabilities of a large language model (LLM). Our experiments
show that LLMs can have similar agreement to humans in evaluating summaries.
Importantly, we show explanatory abstractive summaries are more informative and
better reflect the predicted rumour veracity than just using the highest
ranking posts in the thread.
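The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the importance scores would come from a post-hoc explainability method applied to the verification model, and the prompt template is a hypothetical stand-in for the paper's opinion-guided summarisation step.

```python
# Sketch of the zero-shot explanation pipeline: score posts, keep the
# top-ranked ones, and build a summarisation prompt from them.
# Scores are supplied directly here; in the paper they come from a
# post-hoc explainability method over the rumour verification model.

def select_top_posts(posts, scores, k=3):
    """Rank thread posts by importance score and keep the top k."""
    ranked = sorted(zip(posts, scores), key=lambda pair: pair[1], reverse=True)
    return [post for post, _ in ranked[:k]]

def build_summarisation_prompt(claim, top_posts, verdict):
    """Assemble an opinion-guided summarisation prompt (hypothetical template)."""
    evidence = "\n".join(f"- {p}" for p in top_posts)
    return (
        f"Claim: {claim}\n"
        f"Predicted veracity: {verdict}\n"
        f"Key posts:\n{evidence}\n"
        "Summarise why the thread supports this verdict."
    )

posts = ["Source confirmed it.", "This is fake.", "Photo looks edited.", "No idea."]
scores = [0.9, 0.2, 0.7, 0.1]
top = select_top_posts(posts, scores, k=2)
prompt = build_summarisation_prompt("X happened", top, "true")
```

The resulting prompt would then be passed to a summarisation model, and the output summary evaluated for informativeness via few-shot prompting of an LLM, as in the paper.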