G-ACIL: Analytic Learning for Exemplar-Free Generalized Class Incremental Learning
arXiv (2024)
Abstract
Class incremental learning (CIL) trains a network on sequential tasks with
separated categories but suffers from catastrophic forgetting, where models
quickly lose previously learned knowledge when acquiring new tasks. The
generalized CIL (GCIL) aims to address the CIL problem in a more real-world
scenario, where incoming data have mixed data categories and unknown sample
size distribution, leading to intensified forgetting. Existing attempts at
GCIL either perform poorly or compromise data privacy by storing historical
exemplars. To address this, we propose an exemplar-free
generalized analytic class incremental learning (G-ACIL). The G-ACIL adopts
analytic learning (a gradient-free training technique), and delivers an
analytical solution (i.e., closed-form) to the GCIL scenario. This solution is
derived via decomposing the incoming data into exposed and unexposed classes,
allowing an equivalence between the incremental learning and its joint
training, i.e., the weight-invariant property. Such an equivalence is
theoretically validated through matrix analysis tools, and hence lends
interpretability to GCIL. It is also empirically evidenced by experiments on
various datasets and settings of GCIL. The results show that the G-ACIL
exhibits leading performance with high robustness compared with existing
competitive GCIL methods. Code will be released at
https://github.com/ZHUANGHP/Analytic-continual-learning.
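The core idea of analytic learning in the abstract — a closed-form, gradient-free classifier whose incremental update is provably equivalent to joint training (the weight-invariant property) — can be illustrated with ridge regression updated recursively via the Woodbury identity. This is a minimal sketch under assumed dimensions and synthetic data, not the paper's actual implementation (which operates on frozen backbone features and handles mixed-category phases):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
# d = feature dimension, c = number of classes, gamma = ridge regularizer
d, c, gamma = 8, 3, 1.0

# Phase-1 and phase-2 data, arriving sequentially
X1, Y1 = rng.normal(size=(20, d)), rng.normal(size=(20, c))
X2, Y2 = rng.normal(size=(15, d)), rng.normal(size=(15, c))

# Closed-form (analytic) solution on phase 1 only
R = np.linalg.inv(X1.T @ X1 + gamma * np.eye(d))  # regularized inverse autocorrelation
W = R @ X1.T @ Y1                                 # analytic classifier weights

# Exemplar-free recursive update with phase-2 data (Woodbury identity):
# no phase-1 samples are revisited, only R and W are carried forward.
K = R @ X2.T @ np.linalg.inv(np.eye(len(X2)) + X2 @ R @ X2.T)
R = R - K @ X2 @ R
W = W + R @ X2.T @ (Y2 - X2 @ W)

# Weight-invariant property: the recursive result equals joint training
X_all, Y_all = np.vstack([X1, X2]), np.vstack([Y1, Y2])
W_joint = np.linalg.solve(X_all.T @ X_all + gamma * np.eye(d), X_all.T @ Y_all)
assert np.allclose(W, W_joint)
```

The final assertion is the point: the incrementally updated weights match the joint closed-form solution exactly (up to floating-point error), so nothing is forgotten regardless of how the data stream is partitioned.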