Rethinking few-shot class-incremental learning: A lazy learning baseline

Expert Systems with Applications (2024)

Abstract
Few-shot class-incremental learning is a step forward in the realm of incremental learning, catering to a more realistic context. In typical incremental learning scenarios, the initial session possesses ample data for effective training. However, subsequent sessions often lack sufficient data, so the model simultaneously faces the challenges of catastrophic forgetting from incremental learning and overfitting from few-shot learning. Existing methods employ a fine-tuning strategy on new sessions to carefully maintain a balance between plasticity and stability. In this study, we challenge this balance and design a lazy learning baseline that is more biased towards stability: pre-training a feature extractor with initial session data and fine-tuning a cosine classifier. For new sessions, we forgo further training and instead use class prototypes for classification. Experiments on the CIFAR100, miniImageNet, and CUB200 benchmarks reveal that our approach outperforms state-of-the-art methods. Furthermore, detailed analysis experiments uncover a common challenge in existing few-shot class-incremental learning: low accuracy on new-session classes, for which we provide insightful explanations. Finally, we introduce a new indicator, separate accuracy, designed to more accurately describe how methods handle both old and new classes. Model weights and source code of our method are available at https://github.com/rumorgin/LLB.
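The lazy-learning recipe described in the abstract (freeze the pre-trained feature extractor after the base session, then classify incremental classes by cosine similarity to their class-mean prototypes) can be sketched in a few lines. The snippet below is only an illustration under assumed names (`backbone`, `base_weights`, `extract_features` are hypothetical), not the authors' released implementation; see the linked GitHub repository for the actual code.

```python
# Minimal sketch of prototype-based inference for few-shot class-incremental
# learning, assuming a frozen pre-trained backbone and cosine-classifier
# weights learned on the base session. Names here are illustrative.
import torch
import torch.nn.functional as F

@torch.no_grad()
def extract_features(backbone: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Run the frozen backbone; no further training after the base session."""
    backbone.eval()
    return backbone(images)  # shape: (num_images, feature_dim)

@torch.no_grad()
def build_prototypes(backbone, images, labels, num_new_classes):
    """Average the embeddings of each new class to obtain its prototype."""
    feats = extract_features(backbone, images)
    prototypes = torch.zeros(num_new_classes, feats.shape[1])
    for c in range(num_new_classes):
        prototypes[c] = feats[labels == c].mean(dim=0)
    return prototypes

@torch.no_grad()
def classify(backbone, images, base_weights, new_prototypes):
    """Cosine similarity against base-class weights and new-class prototypes."""
    feats = F.normalize(extract_features(backbone, images), dim=1)
    weights = F.normalize(torch.cat([base_weights, new_prototypes], dim=0), dim=1)
    logits = feats @ weights.t()   # cosine scores over all classes seen so far
    return logits.argmax(dim=1)    # predicted class indices
```

In this sketch the new-session classes are never trained on: their prototypes are simply appended to the base cosine-classifier weights, which is what makes the baseline "lazy" and biased towards stability.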
Keywords
Few-shot class incremental learning, Prototype learning, Pre-training model