Beyond Accuracy: Evaluating the Reasoning Behavior of Large Language Models – A Survey
arXiv (2024)
Abstract
Large language models (LLMs) have recently shown impressive performance on
tasks involving reasoning, leading to a lively debate on whether these models
possess reasoning capabilities similar to humans. However, despite these
successes, the depth of LLMs' reasoning abilities remains uncertain. This
uncertainty partly stems from the predominant focus on task performance,
measured through shallow accuracy metrics, rather than a thorough investigation
of the models' reasoning behavior. This paper seeks to address this gap by
providing a comprehensive review of studies that go beyond task accuracy,
offering deeper insights into the models' reasoning processes. Furthermore, we
survey prevalent methodologies to evaluate the reasoning behavior of LLMs,
emphasizing current trends and efforts towards more nuanced reasoning analyses.
Our review suggests that LLMs tend to rely on surface-level patterns and
correlations in their training data, rather than on genuine reasoning
abilities. Additionally, we identify the need for further research that
delineates the key differences between human and LLM-based reasoning. Through
this survey, we aim to shed light on the complex reasoning processes within
LLMs.