GFR: Generic feature representations for class incremental learning

Neurocomputing (2023)

Abstract
Class incremental learning (CIL) aims to continuously learn new classes while maintaining discrimination for old classes from sequentially arriving data. Due to the lack of old-class samples, existing CIL methods fail to learn discriminative representations for old and new classes simultaneously, resulting in a severe performance drop on old classes, the well-known catastrophic forgetting phenomenon. Different from most existing works, we facilitate CIL by learning generic feature representations that perform well on both seen and unseen classes. Specifically, we prove that representations with a substantial number of significant singular values benefit CIL via better old-knowledge preservation. However, an overly uniform singular value spectrum hurts discrimination on the current task. Furthermore, we propose that increasing the embedding dimension can increase the number of significant singular values, and we validate this assumption from two perspectives: adopting different pooling techniques and devising a wider network. Meanwhile, we also prove that satisfactory current-task accuracy and old-knowledge preservation can be achieved simultaneously. Finally, the simple yet effective generic feature representation regulation (GFR) is devised and incorporated into two baselines. Extensive experiments are conducted on CIFAR100, ImageNet-Subset, and ImageNet. The results show that the proposed method boosts the performance of both baselines by a large margin (2.00%-9.58% on CIFAR100, 0.68%-7.10% on ImageNet-Subset, and 1.18%-5.04% on ImageNet), outperforming existing state-of-the-art methods.
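The abstract ties old-knowledge preservation to the number of significant singular values in the feature representations. The sketch below illustrates how such a count could be measured for a batch of embeddings; the threshold, function name, and random-feature example are illustrative assumptions, not the paper's GFR regularizer.

```python
# Illustrative sketch (not the paper's implementation): count how many
# "significant" singular values a batch of feature embeddings carries,
# the quantity the abstract links to better old-knowledge preservation.
import torch

def significant_singular_values(features: torch.Tensor, ratio: float = 0.01) -> int:
    """Count singular values above `ratio` times the largest one.

    features: (N, D) matrix of D-dimensional embeddings for N samples.
    The 1% threshold is a hypothetical choice for illustration.
    """
    # Center the features so the spectrum reflects variance directions.
    centered = features - features.mean(dim=0, keepdim=True)
    # svdvals returns singular values sorted in descending order.
    singular_values = torch.linalg.svdvals(centered)
    return int((singular_values > ratio * singular_values[0]).sum())

# Example: a wider embedding (larger D) can admit more significant
# singular values, the effect the paper probes via pooling choices
# and a wider network.
narrow = torch.randn(512, 64)   # N=512 samples, D=64 embedding
wide = torch.randn(512, 512)    # same samples, D=512 embedding
print(significant_singular_values(narrow), significant_singular_values(wide))
```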
Keywords
Incremental learning, Generic feature representations, Image classification