InsCL: A Data-efficient Continual Learning Paradigm for Fine-tuning Large Language Models with Instructions
arXiv (2024)
Abstract
Instruction tuning effectively optimizes Large Language Models (LLMs) for
downstream tasks. Due to the changing environment in real-life applications,
LLMs necessitate continual task-specific adaptation without catastrophic
forgetting. Considering the heavy computational cost, replay-based Continual
Learning (CL) methods are the simplest and most widely used for LLMs to address
the forgetting issue. However, traditional replay-based methods do not fully
utilize instructions to customize the replay strategy. In this work, we propose
a novel paradigm called Instruction-based Continual Learning (InsCL). InsCL
dynamically replays previous data based on task similarity, calculated by
Wasserstein Distance with instructions. Moreover, we further introduce an
Instruction Information Metric (InsInfo) to quantify the complexity and
diversity of instructions. Guided by InsInfo, InsCL steers the replay process
toward high-quality data. We conduct extensive experiments
over 16 tasks with different training orders, observing consistent performance
improvements of InsCL. When all tasks have been trained, InsCL achieves a
Relative Gain of 3.0 over Random Replay and of 27.96 over No Replay.
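The abstract describes dynamic replay whose quotas depend on a Wasserstein Distance computed over instructions. As a minimal sketch of that idea (not the paper's actual algorithm), the snippet below represents each task's instructions by hypothetical 1-D feature scores, measures task similarity with `scipy.stats.wasserstein_distance`, and allocates a fixed replay budget across previous tasks in proportion to their distance from the current task; the function name, the 1-D features, and the distance-proportional allocation rule are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def replay_allocation(current_feats, previous_task_feats, budget):
    """Split `budget` replay examples across previous tasks.

    Sketch only: replays more from tasks whose instruction-feature
    distribution is farther (by 1-D Wasserstein distance) from the
    current task's. The real InsCL strategy may differ.
    """
    dists = np.array([
        wasserstein_distance(current_feats, feats)
        for feats in previous_task_feats
    ])
    dists = dists + 1e-12                  # guard against all-zero distances
    weights = dists / dists.sum()          # normalize to a distribution
    counts = np.floor(weights * budget).astype(int)
    counts[np.argmax(weights)] += budget - counts.sum()  # absorb rounding
    return counts
```

For example, with two previous tasks where one is identical to the current task and one is far away, the allocation concentrates the budget on the distant task.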