Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
Neural Computation (2019)
Abstract
Humans are able to master a variety of knowledge and skills with ongoing learning. By contrast, dramatic performance degradation is observed when new tasks are added to an existing neural network model. This phenomenon, termed catastrophic forgetting, is one of the major roadblocks that prevent deep neural networks from achieving human-level artificial intelligence. Several research efforts (e.g.,...