Overcoming catastrophic forgetting in neural networks
Proceedings of the National Academy of Sciences of the United States of America, Volume 114, Issue 13, 2017.
We propose a dedicated continual learning method for graph neural networks, which is, to the best of our knowledge, the first attempt along this line.
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train...
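The method this paper introduces, elastic weight consolidation (EWC), overcomes forgetting by adding a quadratic penalty that anchors parameters to their values after a previous task, weighted by how important each parameter was (estimated via the diagonal Fisher information). A minimal sketch of that penalty term, assuming a flat parameter vector and precomputed Fisher estimates (function name and toy values are illustrative, not from the paper's code):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style regularizer: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2.

    params     -- current parameter vector
    old_params -- parameters after training on the previous task
    fisher     -- diagonal Fisher information (per-parameter importance)
    lam        -- strength of the penalty relative to the new task's loss
    """
    return 0.5 * lam * float(np.sum(fisher * (params - old_params) ** 2))

# Parameters with high Fisher values (important for the old task) are
# penalized more strongly for drifting than unimportant ones.
old = np.array([1.0, 2.0])
new = np.array([1.5, 2.5])          # both parameters moved by 0.5
fisher = np.array([4.0, 0.1])       # first parameter matters far more
print(ewc_penalty(new, old, fisher))  # → 0.5125
```

In training, this term is simply added to the new task's loss, so gradient descent trades off new-task performance against staying close to the old solution along the important directions.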