Fine-Tuning and Prompt Engineering for Large Language Models-based Code Review Automation
arXiv (2024)
Abstract
Context: The rapid evolution of Large Language Models (LLMs) has sparked
significant interest in leveraging their capabilities for automating code
review processes. Prior studies often focus on developing LLMs for code review
automation, yet such development requires expensive resources, which is
infeasible for organizations with limited budgets. Thus, fine-tuning and prompt
engineering are the two common approaches to leveraging LLMs for code review
automation. Objective: We aim to investigate the performance of LLM-based code
review automation in two contexts, i.e., when LLMs are leveraged by
fine-tuning and prompting. Fine-tuning involves training the model on a
specific code review dataset, while prompting involves providing explicit
instructions to guide the model's generation process without requiring a
specific code review dataset. Method: We leverage model fine-tuning and
inference techniques (i.e., zero-shot learning, few-shot learning and persona)
on LLM-based code review automation. In total, we investigate 12 variations of
two LLM-based code review automation approaches (i.e., GPT-3.5 and Magicoder),
and compare them with Guo et al.'s approach and three existing code review
automation approaches. Results: The fine-tuning of GPT-3.5 with zero-shot
learning helps GPT-3.5 achieve 73.17%-74.23% higher Exact Match (EM) than
Guo et al.'s approach. In addition, when GPT-3.5 is not fine-tuned, GPT-3.5
with few-shot learning achieves 46.38%-659.34% higher EM than GPT-3.5 with
zero-shot learning. Conclusions: Based on our results, we recommend that (1)
LLMs for code review automation should be fine-tuned to achieve the highest
performance; and (2) when data is not sufficient for model fine-tuning (e.g., a
cold-start problem), few-shot learning without a persona should be used for
LLM-based code review automation.
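
To make the prompting variants concrete, here is a minimal Python sketch of how zero-shot, few-shot, and persona prompts for the code-refinement task could be assembled with the OpenAI chat API. The prompt wording, the demonstration pair, and the helper names (build_messages, generate_revision) are hypothetical illustrations assuming an OpenAI-style GPT-3.5 endpoint; they are not the paper's exact templates.

```python
# Sketch of the prompting variants discussed above (zero-shot, few-shot,
# persona) applied to code refinement: given submitted code and a reviewer
# comment, ask the model for the revised code. Prompt wording, the example
# pair, and the persona text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = "You are an expert software developer in Java."  # hypothetical persona

# A hypothetical few-shot demonstration pair (submitted code + reviewer
# comment -> revised code); the study samples such pairs from its dataset.
FEW_SHOT_EXAMPLE = (
    "Submitted code:\n"
    "if (list.size() == 0) { return; }\n"
    "Reviewer comment: use isEmpty() instead of size() == 0\n"
    "Revised code:\n"
    "if (list.isEmpty()) { return; }\n"
)

def build_messages(code: str, comment: str, few_shot: bool, persona: bool):
    """Assemble a chat prompt for one of the four prompting variants."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": PERSONA})
    task = ""
    if few_shot:
        task += "Here is an example of the task:\n" + FEW_SHOT_EXAMPLE + "\n"
    task += (
        "Revise the submitted code to address the reviewer comment. "
        "Return only the revised code.\n"
        f"Submitted code:\n{code}\n"
        f"Reviewer comment: {comment}\n"
        "Revised code:"
    )
    messages.append({"role": "user", "content": task})
    return messages

def generate_revision(code: str, comment: str,
                      few_shot: bool = True, persona: bool = False) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the GPT-3.5 model in the study
        temperature=0.0,        # deterministic output eases exact-match scoring
        messages=build_messages(code, comment, few_shot, persona),
    )
    return response.choices[0].message.content
```

Calling generate_revision(code, comment, few_shot=True, persona=False) corresponds to the configuration the abstract recommends when fine-tuning data is unavailable: few-shot learning without a persona.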