In Search of Truth: An Interrogation Approach to Hallucination Detection

Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein

arXiv (2024)

Abstract
Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives are limited due to various reasons. One critical factor hindering their widespread adoption is the occurrence of hallucinations, where LLMs invent answers that sound realistic, yet drift away from factual truth. In this paper, we present a novel method for detecting hallucinations in large language models, which tackles a critical issue in the adoption of these models in various real-world scenarios. Through extensive evaluations across multiple datasets and LLMs, including Llama-2, we study the hallucination levels of various recent LLMs and demonstrate the effectiveness of our method to automatically detect them. Notably, we observe up to 62% hallucinations for Llama-2 in a specific experiment, where our method achieves a Balanced Accuracy (B-ACC) of 87%.
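The abstract reports detection quality as Balanced Accuracy (B-ACC), the average of the true-positive rate and the true-negative rate, which is robust to class imbalance between hallucinated and factual answers. Below is a minimal sketch of how such a score could be computed for a binary hallucination detector; the variable names and example labels are illustrative assumptions, not taken from the paper.

    def balanced_accuracy(y_true, y_pred):
        """Balanced Accuracy: mean of recall on the positive class
        (hallucinated, label 1) and recall on the negative class (factual, label 0)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        tpr = tp / (tp + fn) if (tp + fn) else 0.0  # sensitivity
        tnr = tn / (tn + fp) if (tn + fp) else 0.0  # specificity
        return 0.5 * (tpr + tnr)

    # Hypothetical labels: 1 = hallucinated answer, 0 = factual answer.
    y_true = [1, 1, 0, 0, 0, 1]
    y_pred = [1, 0, 0, 0, 1, 1]
    print(f"B-ACC = {balanced_accuracy(y_true, y_pred):.2f}")  # 0.67

Unlike plain accuracy, this score stays at 0.5 for a detector that always predicts the majority class, which matters when hallucination rates vary widely across models and datasets.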