Self-Supervised Models are Continual Learners

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Self-supervised models have been shown to produce comparable or better visual representations than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy is catastrophically reduced in a Continual Learning (CL) scenario where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for Continual self-supervised visual representation Learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach empirically by training six popular self-supervised models in various CL settings. Code: github.com/DonkeyShot21/cassle.
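The following is a minimal sketch of the distillation idea described in the abstract: a small predictor network maps the current representations to the space of a frozen copy of the past encoder, and a self-supervised similarity objective is reused as the distillation term. It is not the authors' implementation (see the linked repository for that); the PyTorch setup, the BYOL/SimSiam-style negative-cosine loss, and names such as current_encoder, frozen_past_encoder, and ssl_loss are illustrative assumptions.

# Sketch only: predictor-based distillation for continual self-supervised learning.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def negative_cosine(p, z):
    """BYOL/SimSiam-style similarity loss; z is treated as a fixed target."""
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()


class Predictor(nn.Module):
    """Small MLP mapping current representations to their past state."""

    def __init__(self, dim=2048, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, z):
        return self.net(z)


def distillation_loss(x, current_encoder, frozen_past_encoder, predictor):
    """SSL-style loss between predicted current features and frozen past features."""
    z_cur = current_encoder(x)            # trainable encoder, current task
    with torch.no_grad():
        z_past = frozen_past_encoder(x)   # frozen snapshot from the previous task
    return negative_cosine(predictor(z_cur), z_past)


# Usage sketch: before a new task, freeze a copy of the encoder and add the
# distillation term to the regular self-supervised objective (names assumed):
#   frozen_past_encoder = copy.deepcopy(current_encoder).eval()
#   for p in frozen_past_encoder.parameters():
#       p.requires_grad = False
#   total_loss = ssl_loss + distillation_loss(x, current_encoder, frozen_past_encoder, predictor)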
Keywords
Self-, semi- & meta- representation learning