Exploring and Evaluating Hallucinations in LLM-Powered Code Generation
arXiv (2024)
Abstract
The rise of Large Language Models (LLMs) has significantly advanced many
software engineering tasks, particularly code generation. Despite this
promising performance, LLMs are prone to generating hallucinations: they may
produce outputs that deviate from users' intent, exhibit internal
inconsistencies, or conflict with factual knowledge, making their deployment
potentially risky in a wide range of applications. Existing work mainly
focuses on investigating hallucinations in the domain of natural
language generation (NLG), leaving a gap in understanding the types and extent
of hallucinations in the context of code generation. To bridge the gap, we
conducted a thematic analysis of LLM-generated code to summarize and
categorize the hallucinations it contains. Our study established a
comprehensive taxonomy of hallucinations in LLM-generated code, comprising
five primary categories distinguished by the conflicting objectives and
varying degrees of deviation observed in code generation. Furthermore, we
systematically analyzed the distribution of hallucinations, exploring
variations among different LLMs and their correlation with code correctness.
Based on the results, we proposed HalluCode, a benchmark for evaluating the
performance of code LLMs in recognizing hallucinations. Hallucination
recognition and mitigation experiments with HalluCode and HumanEval show that
existing LLMs face significant challenges in recognizing hallucinations,
particularly in identifying their types, and can hardly mitigate them. We
believe our findings will shed light on future research into hallucination
evaluation, detection, and mitigation, ultimately paving the way for more
effective and reliable code LLMs.
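
To make the recognition task concrete, below is a minimal sketch of how such
an evaluation could be scored: a model is asked whether a generated snippet
hallucinates and, if so, which category applies, and accuracy is computed for
both decisions. The RecognitionTask schema, the category label, and the
ask_llm stub are illustrative assumptions for this sketch, not the actual
HalluCode format or evaluation harness.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class RecognitionTask:
    # One toy instance: a task intent, a candidate completion, and gold
    # annotations. The schema and category label used here are illustrative
    # placeholders, not the HalluCode data format.
    intent: str
    code: str
    is_hallucinated: bool
    category: Optional[str]

def ask_llm(intent: str, code: str) -> Tuple[bool, Optional[str]]:
    # Stand-in for querying a code LLM; replace with a real model call.
    # The naive heuristic below only exists so the sketch runs end to end.
    flagged = "undefined_helper(" in code
    return flagged, ("intent-conflicting" if flagged else None)

def evaluate(tasks: List[RecognitionTask]) -> dict:
    # Binary recognition accuracy over all instances, plus category
    # identification accuracy over the hallucinated instances only.
    rec_hits, cat_hits, cat_total = 0, 0, 0
    for t in tasks:
        flagged, pred_cat = ask_llm(t.intent, t.code)
        rec_hits += int(flagged == t.is_hallucinated)
        if t.is_hallucinated:
            cat_total += 1
            cat_hits += int(flagged and pred_cat == t.category)
    return {
        "recognition_acc": rec_hits / len(tasks),
        "category_acc": cat_hits / cat_total if cat_total else 0.0,
    }

if __name__ == "__main__":
    tasks = [
        RecognitionTask("sum a list of numbers",
                        "def f(xs): return sum(xs)", False, None),
        RecognitionTask("parse a date string",
                        "def f(s): return undefined_helper(s)", True,
                        "intent-conflicting"),
    ]
    print(evaluate(tasks))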