Bugs in Large Language Models Generated Code: An Empirical Study
arXiv (2024)
Abstract
Large Language Models (LLMs) for code have gained significant attention
recently. They can generate code in different programming languages based on
provided prompts, fulfilling a long-standing dream in Software Engineering (SE),
i.e., automatic code generation. Similar to human-written code, LLM-generated
code is prone to bugs, and these bugs have not yet been thoroughly examined by
the community. Given the increasing adoption of LLM-based code generation tools
(e.g., GitHub Copilot) in SE activities, it is critical to understand the
characteristics of bugs contained in code generated by LLMs. This paper
examines a sample of 333 bugs collected from code generated using three leading
LLMs (i.e., CodeGen, PanGu-Coder, and Codex) and identifies the following 10
distinctive bug patterns: Misinterpretations, Syntax Error, Silly Mistake,
Prompt-biased code, Missing Corner Case, Wrong Input Type, Hallucinated Object,
Wrong Attribute, Incomplete Generation, and Non-Prompted Consideration. The bug
patterns are presented in the form of a taxonomy. The identified bug patterns
are validated using an online survey with 34 LLM practitioners and researchers.
The surveyed participants generally asserted the significance and prevalence of
the bug patterns. Researchers and practitioners can leverage these findings to
develop effective quality assurance techniques for LLM-generated code. This
study sheds light on the distinctive characteristics of LLM-generated code.
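For illustration, here is a minimal sketch of one of the identified patterns, Missing Corner Case, as it might appear in LLM-generated Python. The functions and inputs are hypothetical examples, not drawn from the paper's dataset:

```python
# Hypothetical illustration of the "Missing Corner Case" bug pattern:
# generated code that works for typical inputs but misses an edge case.

def buggy_average(nums):
    # Missing Corner Case: raises ZeroDivisionError on an empty list.
    return sum(nums) / len(nums)

def fixed_average(nums):
    # Corrected version: handle the empty-list corner case explicitly.
    if not nums:
        return 0.0
    return sum(nums) / len(nums)
```

Such bugs pass casual testing on common inputs, which is one reason the study argues for quality assurance techniques tailored to LLM-generated code.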