Consistent Prompting for Rehearsal-Free Continual Learning
CVPR 2024
Abstract
Continual learning empowers models to adapt autonomously to ever-changing
environments or data streams without forgetting old knowledge. Prompt-based
approaches build on frozen pre-trained models to learn task-specific prompts
and classifiers efficiently. However, existing prompt-based methods are
inconsistent between training and testing, which limits their effectiveness.
We identify two types of inconsistency. Classifier inconsistency arises
because test predictions are made from all classifiers, while training focuses
only on the current task's classifier without holistic alignment. Prompt
inconsistency means that the prompt selected during testing may not correspond
to the one associated with that task during training. In this paper, we
propose a novel prompt-based method, Consistent Prompting (CPrompt), for more
aligned training and testing. Specifically, all existing classifiers are
exposed to prompt training, resulting in classifier consistency learning. In
addition, prompt consistency learning is proposed to enhance prediction
robustness and boost prompt selection accuracy. Our Consistent Prompting
surpasses its prompt-based counterparts and achieves state-of-the-art
performance on multiple continual learning benchmarks. Detailed analysis shows
that the improvements come from more consistent training and testing.
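
To make the two consistency ideas concrete, below is a minimal PyTorch-style sketch of one training step. It is an illustration under stated assumptions, not the paper's exact formulation: the backbone(x, prompt=...) interface, the per-task head list, the KL-based consistency term, and the alpha weight are all hypothetical names chosen for the example.

import torch
import torch.nn.functional as F

def consistent_prompting_step(backbone, prompts, heads, x, y, task_id, alpha=1.0):
    """One hedged training step illustrating the two consistency ideas.

    backbone: frozen pre-trained encoder (e.g., a ViT) taking a prompt argument.
    prompts:  list of learnable prompt tensors, one per task seen so far.
    heads:    list of linear classifiers, one per task (earlier ones frozen).
    y:        labels indexed over all classes seen so far.
    """
    # Classifier consistency: test-time predictions come from ALL task heads,
    # so training computes cross-entropy over the same concatenated logit
    # space instead of only the current task's head.
    feats = backbone(x, prompt=prompts[task_id])
    logits = torch.cat([head(feats) for head in heads], dim=1)
    loss_cls = F.cross_entropy(logits, y)

    # Prompt consistency: the prompt selected at test time may not be the one
    # trained for this task, so encourage predictions to agree across prompts
    # (here: one-sided KL against a randomly drawn prompt, an assumption).
    other = torch.randint(len(prompts), (1,)).item()
    with torch.no_grad():
        feats_other = backbone(x, prompt=prompts[other])
        logits_other = torch.cat([head(feats_other) for head in heads], dim=1)
    loss_con = F.kl_div(
        F.log_softmax(logits, dim=1),
        F.softmax(logits_other, dim=1),
        reduction="batchmean",
    )
    return loss_cls + alpha * loss_con

In this sketch, test-time prediction would take the argmax over the same concatenated logits used in training, which is the alignment the abstract refers to as classifier consistency.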