Evaluating Large Language Models with Runtime Behavior of Program Execution
arXiv (2024)
Abstract
Large language models for code (i.e., code LLMs) have shown strong code
understanding and generation capabilities. To evaluate the capabilities of code
LLMs in various aspects, many benchmarks have been proposed (e.g., HumanEval
and ClassEval). Code reasoning is one of the most essential abilities of code
LLMs, but existing benchmarks for code reasoning are not sufficient. Typically,
they focus on predicting a program's input and output, ignoring the
evaluation of intermediate behavior during program execution, as well as
the logical consistency of the reasoning (e.g., the model should not give
the correct output if its prediction of the execution path is wrong). To
address these problems, in this paper, we propose a framework, namely REval,
for evaluating code reasoning abilities and consistency of code LLMs with
program execution. We utilize existing code benchmarks and adapt them to new
benchmarks within our framework. A large-scale empirical study is conducted,
and most LLMs show unsatisfactory performance on both Runtime Behavior
Reasoning (i.e., an average accuracy of 44.4%) and Incremental Consistency
Evaluation (i.e., an average IC score of 10.3). Evaluation results of current code LLMs
reflect the urgent need for the community to strengthen the code reasoning
capability of code LLMs.
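
To make the Runtime Behavior Reasoning setting concrete: for Python benchmarks
such as HumanEval, ground truth for intermediate behavior (which lines execute,
in what order, and with what variable values) can be recorded with the standard
sys.settrace hook. The sketch below is illustrative only; trace_run and
sample_program are hypothetical names, not part of REval.

import sys

def trace_run(func, *args):
    # Run func(*args) under a line-level tracer and record, for every
    # executed line of func, the line number and a snapshot of the locals.
    trace = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, trace

def sample_program(n):
    # A toy stand-in for a benchmark problem.
    total = 0
    for i in range(n):
        if i % 2 == 0:
            total += i
    return total

output, steps = trace_run(sample_program, 5)
print("output:", output)                           # ground truth for output prediction
print("executed lines:", [ln for ln, _ in steps])  # execution path / line coverage
print("final locals:", steps[-1][1])               # intermediate program state

Asking a model for the executed line sequence, or for the locals at a given
step, and comparing its answer against such a trace is one plausible way to
score predictions of intermediate behavior rather than only the final output.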
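
The abstract reports an IC (incremental consistency) score without defining it.
One plausible formalization of the consistency requirement it describes is that
the per-level correctness flags, ordered from shallow to deep reasoning (e.g.,
coverage, then execution path, then program state, then output), must form a
prefix of correct answers: a correct deep answer only counts as consistent if
all shallower answers are also correct. The sketch below, including the prefix
rule and the function names, is an assumption rather than the paper's exact
metric.

from typing import List

def is_incrementally_consistent(correct: List[bool]) -> bool:
    # correct[i] says whether the model answered reasoning level i
    # correctly, ordered from shallow to deep. A correct answer that
    # follows an earlier mistake (e.g., the right output despite a
    # wrong execution path) breaks incremental consistency.
    seen_wrong = False
    for c in correct:
        if seen_wrong and c:
            return False
        seen_wrong = seen_wrong or not c
    return True

def ic_score(per_example: List[List[bool]]) -> float:
    # Fraction of examples whose answers are incrementally consistent
    # (a hypothetical stand-in for the paper's IC score).
    return sum(map(is_incrementally_consistent, per_example)) / len(per_example)

# The first example is inconsistent (correct output after a wrong path),
# the second is consistent, so the score is 0.5.
print(ic_score([[True, False, True, True],
                [True, True, True, False]]))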