Intelligent Learning Rate Distribution to reduce Catastrophic Forgetting in Transformers
arXiv (2024)
Abstract
Pretraining language models on large text corpora is a common practice in
natural language processing. Fine-tuning of these models is then performed to
achieve the best results on a variety of tasks. In this paper, we investigate
the problem of catastrophic forgetting in transformer neural networks and
question the common practice of fine-tuning with a flat learning rate for the
entire network in this context. We perform a hyperparameter optimization
process to find learning rate distributions that are better than a flat
learning rate. We combine the learning rate distributions thus found and show
that they generalize, yielding better performance with respect to
catastrophic forgetting. We validate these learning rate distributions on a
variety of NLP benchmarks from the GLUE dataset.
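The core idea is to replace the single flat fine-tuning learning rate with a distribution of rates across the network. As a minimal sketch of that idea (not the authors' optimized distributions, which were found via hyperparameter search), the following PyTorch snippet assigns each encoder layer of a BERT model its own learning rate through optimizer parameter groups; the model name, base rate, and exponential per-layer decay are illustrative assumptions.

```python
# Sketch: layer-wise learning rates for transformer fine-tuning.
# Assumptions: bert-base-uncased, base_lr = 2e-5, per-layer decay = 0.9.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

num_layers = len(model.bert.encoder.layer)  # 12 for BERT-base
base_lr = 2e-5   # assumed rate for the top layer and task head
decay = 0.9      # assumed decay factor toward the embeddings

param_groups = []
# Lower layers get smaller learning rates, which limits how much the
# pretrained representations are overwritten during fine-tuning.
for i, layer in enumerate(model.bert.encoder.layer):
    lr = base_lr * decay ** (num_layers - 1 - i)
    param_groups.append({"params": layer.parameters(), "lr": lr})

# Embeddings train at the smallest rate; pooler and classifier head
# train at the full base rate.
param_groups.append({"params": model.bert.embeddings.parameters(),
                     "lr": base_lr * decay ** num_layers})
param_groups.append({"params": list(model.bert.pooler.parameters())
                               + list(model.classifier.parameters()),
                     "lr": base_lr})

optimizer = torch.optim.AdamW(param_groups, weight_decay=0.01)
```

The exponential decay used here is just one plausible shape for such a distribution; the paper's contribution is to search for effective distributions rather than fixing one a priori.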