Continual Learning through Networks Splitting and Merging with Dreaming-Meta-Weighted Model Fusion
CoRR (2023)
Abstract
It is challenging to balance network stability and plasticity in continual
learning scenarios: stability suffers from model updates, while plasticity
benefits from them. Existing works usually focus on stability and restrict the
plasticity of later tasks to avoid catastrophic forgetting of learned
knowledge. In contrast, we propose a continual learning method named
Split2MetaFusion that achieves a better trade-off through a two-stage
strategy: splitting and meta-weighted fusion. In the splitting stage, a slow
model with better stability and a fast model with better plasticity are
learned sequentially. Stability and plasticity are then both preserved by
fusing the two models in an adaptive manner. To this end, we design an
optimizer named Task-Preferred Null Space Projector (TPNSP) for the slow
learning process, which narrows the fusion gap. To achieve better model
fusion, we further design a Dreaming-Meta-Weighted fusion policy that
maintains old and new knowledge simultaneously without requiring access to
previous datasets. Experimental results and analysis reported in this work
demonstrate the superiority of the proposed method in maintaining network
stability while keeping its plasticity. Our code will be released.
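The abstract does not detail how TPNSP constrains the slow model's updates. As background, a common null-space technique in continual learning projects gradients into the (approximate) null space of features collected from old tasks, so that updates leave old-task outputs nearly unchanged. The sketch below illustrates that generic idea with NumPy; it is an assumption for illustration, not the authors' TPNSP (which additionally incorporates task preference), and the function names are hypothetical.

```python
import numpy as np

def null_space_projector(features, eps=1e-5):
    """Build a projection matrix onto the approximate null space of
    old-task features.

    features: (n_samples, dim) activations recorded on previous tasks.
    Directions with near-zero singular values of the uncentered
    covariance span the null space; moving weights along them barely
    changes old-task responses.
    """
    u, s, _ = np.linalg.svd(features.T @ features)
    null_basis = u[:, s <= eps * s.max()]   # keep near-zero directions
    return null_basis @ null_basis.T        # projection matrix P

def project_gradient(grad, P):
    # An update step along P @ grad (approximately) preserves
    # old-task outputs while still allowing learning of the new task.
    return P @ grad
```

For example, if the old-task features only span the first coordinate axis, the projector zeroes out the gradient's first component and passes the rest through unchanged.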