Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
arXiv (2023)
Abstract
We investigate the internal behavior of Transformer-based Large Language
Models (LLMs) when they generate factually incorrect text. We propose modeling
factual queries as constraint satisfaction problems and use this framework to
investigate how the LLM interacts internally with factual constraints. We find
a strong positive relationship between the LLM's attention to constraint tokens
and the factual accuracy of generations. We curate a suite of 10 datasets
containing over 40,000 prompts to study the task of predicting factual errors
with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe,
a method that probes attention patterns to predict factual errors and
fine-grained constraint satisfaction, enabling early error identification. The
approach and findings take another step towards using the mechanistic
understanding of LLMs to enhance their reliability.
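The core idea of the abstract, that attention mass on constraint tokens is a usable signal for predicting factual accuracy, can be illustrated with a small probe. The sketch below is a minimal illustration on synthetic attention tensors, not the paper's exact SAT Probe recipe: the feature construction (attention from the final token to a constraint span, flattened across layers and heads) and the logistic-regression probe are assumptions made for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def constraint_attention_features(attn, constraint_pos):
    """Flatten, per (layer, head), the attention from the last token
    to the constraint-token positions into one feature vector."""
    # attn: (layers, heads, seq, seq) row-normalized attention weights
    return attn[:, :, -1, constraint_pos].sum(axis=-1).ravel()

# Synthetic demo: "factual" examples get extra attention on the
# constraint span (positions 2..4), mimicking the paper's finding.
rng = np.random.default_rng(0)
LAYERS, HEADS, SEQ = 4, 8, 16
CONSTRAINT = np.arange(2, 5)

def make_example(factual):
    attn = rng.random((LAYERS, HEADS, SEQ, SEQ))
    attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
    if factual:
        attn[:, :, -1, CONSTRAINT] += 0.5     # boost constraint attention
        attn /= attn.sum(axis=-1, keepdims=True)
    return constraint_attention_features(attn, CONSTRAINT)

X = np.array([make_example(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

# A simple linear probe over the attention features.
probe = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
acc = probe.score(X[150:], y[150:])
```

With a strong synthetic signal the probe separates the two classes easily; on real model attention the separation is what the paper's experiments measure.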