Examining Forgetting in Continual Pre-training of Aligned Large Language Models
CoRR (2024)
Abstract
Recent advances in Large Language Models (LLMs) have exhibited remarkable
proficiency across various tasks. Given the potent applications of LLMs in
numerous fields, there has been a surge in LLM development. In developing LLMs,
a common practice involves continual pre-training on previously fine-tuned
models. However, this can lead to catastrophic forgetting. In our work, we
investigate the phenomenon of forgetting that occurs during continual
pre-training on an existing fine-tuned LLM. We evaluate the impact of
continual pre-training on the fine-tuned LLM across various dimensions,
including output format, knowledge, and reliability. Experimental results
highlight the non-trivial challenge of addressing catastrophic forgetting
during continual pre-training, especially the repetition issue.