WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation
CoRR (2023)
Abstract
Recent work demonstrates that, after fine-tuning on a high-quality
instruction dataset, the resulting model can obtain impressive capabilities to
address a wide range of tasks. However, existing methods for instruction data
generation often produce duplicate data and offer insufficient control over
data quality. In this paper, we extend the generalization of instruction
tuning by classifying the instruction data into 4 code-related tasks and
propose an LLM-based Generator-Discriminator data processing framework to
generate diverse, high-quality instruction data from open-source code. Hence,
we introduce CodeOcean, a dataset comprising 20,000 instruction instances
across 4 universal code-related tasks, aimed at augmenting the effectiveness
of instruction tuning and improving the generalization ability of the
fine-tuned models. Subsequently, we present WaveCoder, a fine-tuned code LLM
with Widespread And Versatile Enhanced instruction tuning, specifically
designed to enhance the instruction tuning of code LLMs. Our experiments
demonstrate that WaveCoder models outperform other open-source models in
terms of generalization ability across different code-related tasks at the
same fine-tuning scale. Moreover, WaveCoder exhibits high efficiency in
previous code generation tasks. This paper thus offers a significant
contribution to the field of instruction data generation and fine-tuning,
providing new insights and tools for enhancing performance in code-related
tasks.
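The abstract describes the Generator-Discriminator framework only at a high level. The sketch below shows, in Python, what such a generate-then-filter loop over open-source code could look like. Every name here (call_llm, generate_example, discriminate, refine), the prompts, the task labels, and the retry logic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an LLM-based Generator-Discriminator loop for
# instruction-data generation, in the spirit of the framework described
# in the abstract. All names and prompts are illustrative assumptions.

import json
from typing import Optional

# Four code-related task categories (labels assumed for illustration).
TASKS = ["code generation", "code summarization",
         "code translation", "code repair"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; plug in your own client."""
    raise NotImplementedError("connect an LLM backend here")

def generate_example(raw_code: str, task: str) -> dict:
    """Generator: turn a raw open-source snippet into an
    (instruction, output) pair for the given task category."""
    prompt = (
        f"Task category: {task}\n"
        f"Source code:\n{raw_code}\n"
        "Write an instruction and a correct response grounded in this "
        'code. Answer as JSON: {"instruction": ..., "output": ...}'
    )
    return json.loads(call_llm(prompt))

def discriminate(example: dict, task: str) -> bool:
    """Discriminator: ask the LLM to judge the candidate pair against
    quality criteria and return a keep/discard verdict."""
    prompt = (
        f"Task category: {task}\n"
        f"Candidate example:\n{json.dumps(example)}\n"
        "Is the instruction clear and non-duplicated, and is the output "
        "correct? Answer exactly YES or NO."
    )
    return call_llm(prompt).strip().upper().startswith("YES")

def refine(raw_code: str, task: str, max_rounds: int = 3) -> Optional[dict]:
    """Generate-then-filter loop: regenerate until the discriminator
    accepts the example or the retry budget runs out."""
    for _ in range(max_rounds):
        example = generate_example(raw_code, task)
        if discriminate(example, task):
            return example
    return None  # drop snippets that never pass the quality check
```

The design point this sketch illustrates is the feedback loop: rather than accepting every generated pair, a second LLM pass filters for quality and diversity, which addresses the duplication and quality-control problems the abstract attributes to prior generation methods.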