GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers
CoRR (2024)
Abstract
Large language models (LLMs) have achieved impressive performance across
various mathematical reasoning benchmarks. However, there are increasing
debates regarding whether these models truly understand and apply mathematical
knowledge or merely rely on shortcuts for mathematical reasoning. One essential
and frequently occurring evidence is that when the math questions are slightly
changed, LLMs can behave incorrectly. This motivates us to evaluate the
robustness of LLMs' math reasoning capability by testing a wide range of
question variations. We introduce the adversarial grade school math
(GSM-Plus) dataset, an extension of GSM8K augmented with various
mathematical perturbations. Our experiments on 25 LLMs and 4 prompting
techniques show that while LLMs exhibit different levels of math reasoning
abilities, their performances are far from robust. In particular, even for
problems that have been solved in GSM8K, LLMs can make mistakes when new
statements are added or the question targets are altered. We also explore
whether more robust performance can be achieved by composing existing prompting
methods, in which we try an iterative method that generates and verifies each
intermediate thought based on its reasoning goal and calculation result. Code
and data are publicly available.
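As a hedged illustration (not drawn from the paper's released code), the two perturbation types highlighted in the abstract — adding a new statement and altering the question target — can be sketched as simple transformations of a GSM8K-style question. The example question and helper names below are hypothetical:

```python
# Illustrative sketch of two perturbation types mentioned in the abstract:
# distractor insertion and question-target alteration.
# The question text and function names are hypothetical examples,
# not part of the actual GSM-Plus release.

def add_distractor(question: str, distractor: str) -> str:
    """Insert an irrelevant statement before the final question sentence."""
    body, _, query = question.rpartition(". ")
    return f"{body}. {distractor} {query}"

def alter_target(question: str, new_query: str) -> str:
    """Replace the final question sentence with a new reasoning target."""
    body, _, _ = question.rpartition(". ")
    return f"{body}. {new_query}"

base = ("Janet has 3 boxes with 12 apples each. "
        "How many apples does she have in total?")

# A robust solver should still answer 36 despite the irrelevant statement.
perturbed_1 = add_distractor(base, "Her brother has 5 oranges.")
# The altered target changes the expected answer to 24.
perturbed_2 = alter_target(base, "How many apples are in two of the boxes?")
```

A benchmark built this way can check whether a model that solves the original question also solves its perturbed variants, which is the robustness gap the paper measures.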