YODA: Teacher-Student Progressive Learning for Language Models
CoRR (2024)
Abstract
Although large language models (LLMs) have demonstrated adeptness in a range
of tasks, they still lag behind human learning efficiency. This disparity is
often attributed to humans' inherent capacity to learn from basic examples,
gradually generalize to more complex problems, and refine their skills with
continuous feedback. Inspired by this process, this paper introduces YODA, a novel
teacher-student progressive learning framework that emulates the
teacher-student education process to improve the efficacy of model fine-tuning.
The framework operates on an interactive basic-generalized-harder
loop. The teacher agent provides tailored feedback on the student's answers,
and systematically organizes the education process. This process unfolds by
teaching the student basic examples, reinforcing understanding through
generalized questions, and then enhancing learning by posing questions of
progressively increasing complexity. With the teacher's guidance, the student
learns to iteratively refine its answers with feedback and forms a robust and
comprehensive understanding of the posed questions. The systematic procedural
data, which reflects the progressive learning process of humans, is then
utilized for model training. Taking math reasoning as a testbed, experiments
show that training LLaMA2 with data generated by YODA significantly improves
over SFT (+17.01% on GSM8K and +9.98% on MATH). In addition, we find
that training with curriculum learning further improves learning robustness.
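The abstract only outlines the procedure, so the following is a minimal Python sketch of how the basic-generalized-harder loop might collect procedural training data. All names here (`teacher_llm`, `student_llm`, `refine_with_feedback`, `yoda_episode`) are hypothetical stand-ins for illustration, not the paper's actual implementation, and the stop criterion and prompts are assumptions.

```python
from dataclasses import dataclass, field

def teacher_llm(prompt: str) -> str:
    """Placeholder for a call to the teacher agent's LLM (assumed interface)."""
    return f"<teacher response to: {prompt[:40]}...>"

def student_llm(prompt: str) -> str:
    """Placeholder for a call to the student agent's LLM (assumed interface)."""
    return f"<student response to: {prompt[:40]}...>"

@dataclass
class Interaction:
    question: str
    attempts: list[str] = field(default_factory=list)   # student answers, in order
    feedbacks: list[str] = field(default_factory=list)  # teacher feedback per round

def refine_with_feedback(question: str, max_rounds: int = 3) -> Interaction:
    """Student answers; teacher critiques; student revises (iterative refinement)."""
    record = Interaction(question)
    answer = student_llm(f"Solve: {question}")
    record.attempts.append(answer)
    for _ in range(max_rounds):
        feedback = teacher_llm(f"Critique this answer to '{question}': {answer}")
        record.feedbacks.append(feedback)
        # The real system would let the teacher signal correctness and stop early;
        # here we simply run a fixed number of refinement rounds.
        answer = student_llm(f"Revise your answer to '{question}' using: {feedback}")
        record.attempts.append(answer)
    return record

def yoda_episode(basic_question: str, n_generalized: int = 2) -> list[Interaction]:
    """One basic-generalized-harder episode yielding procedural training data."""
    data = [refine_with_feedback(basic_question)]
    # Generalized questions reinforce the same concept in new surface forms.
    for _ in range(n_generalized):
        gen_q = teacher_llm(f"Write a variant testing the same concept as: {basic_question}")
        data.append(refine_with_feedback(gen_q))
    # A harder question extends the concept with increased complexity.
    hard_q = teacher_llm(f"Write a harder question building on: {basic_question}")
    data.append(refine_with_feedback(hard_q))
    return data

if __name__ == "__main__":
    episode = yoda_episode("If 3 apples cost $6, how much do 7 apples cost?")
    print(f"Collected {len(episode)} question interactions for training.")
```

Under this reading, the collected interaction traces would be serialized into supervised fine-tuning examples, and the curriculum-learning variant would additionally order them from basic to harder during training.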