On the Limitations of Large Language Models (LLMs): False Attribution
arXiv (2024)
Abstract
In this work, we provide insight into one important limitation of large
language models (LLMs), namely false attribution, and introduce a new
hallucination metric, the Simple Hallucination Index (SHI). Automatic author
attribution for relatively small chunks of text is an important but
challenging NLP task. We empirically evaluate the power of three open
state-of-the-art (SotA) LLMs in a zero-shot setting (LLaMA-2-13B, Mixtral
8x7B, and Gemma-7B), especially as
human annotation can be costly. We collected the top 10 most popular books,
according to Project Gutenberg, divided each one into equal chunks of 400
words, and asked each LLM to predict the author. We then randomly sampled 162
chunks for human evaluation from each of the annotated books, based on an
error margin of 7% and a confidence level of 95% for the book with the largest
number of chunks (Great Expectations by Charles Dickens, having 922 chunks).
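As a minimal illustration of the data-preparation steps above (not the authors' released code), the sketch below splits a book into equal 400-word chunks and derives the human-evaluation sample size; the finite-population sample-size formula and the function names are assumptions, since the abstract only reports the 7% error margin, the 95% confidence level, and the resulting 162 chunks.

```python
# Minimal sketch, assuming plain whitespace tokenization and a Cochran-style
# finite-population sample-size formula; not the authors' released code.
import math
from typing import List


def split_into_chunks(text: str, words_per_chunk: int = 400) -> List[str]:
    """Split a book into consecutive chunks of `words_per_chunk` words."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]


def sample_size(population: int, margin_of_error: float,
                z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for a finite population.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative assumed proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite-population correction


# Great Expectations yields 922 chunks of 400 words; a 7% error margin at a
# 95% confidence level gives 162 chunks, matching the sample reported above.
print(sample_size(population=922, margin_of_error=0.07))  # -> 162
```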
The average results show that Mixtral 8x7B achieves the highest prediction
accuracy (0.737) and the lowest SHI (0.249), with a Pearson's correlation (r)
between accuracy and SHI of -0.9996, followed by LLaMA-2-13B and Gemma-7B.
However, Mixtral 8x7B suffers from high hallucination for three of the books,
with SHI rising as high as 0.87
(in the range 0-1, where 1 is the worst). The strong negative correlation of
accuracy and SHI, given by r, demonstrates the fidelity of the new
hallucination metric, which is generalizable to other tasks. We publicly
release the annotated chunks of data and our code to aid reproducibility and
the evaluation of other models.
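The fidelity check described above can be sketched as follows; the per-book values are hypothetical placeholders (not the paper's results), and the exact SHI formula is not restated here, so the snippet only illustrates how Pearson's r between per-book accuracy and SHI would be computed.

```python
# Minimal sketch of correlating per-book accuracy with per-book SHI.
# The arrays below are hypothetical placeholders, not the paper's results.
import numpy as np

accuracy = np.array([0.95, 0.88, 0.91, 0.40, 0.76, 0.83, 0.15, 0.97, 0.64, 0.88])
shi      = np.array([0.04, 0.10, 0.08, 0.55, 0.22, 0.15, 0.87, 0.02, 0.33, 0.11])

# Pearson's correlation coefficient; a value close to -1 indicates that the
# hallucination index moves inversely with attribution accuracy.
r = np.corrcoef(accuracy, shi)[0, 1]
print(f"Pearson's r = {r:.4f}")
```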