Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation
arXiv (2024)
Abstract
We propose and study a realistic Continual Learning (CL) setting in which
learning algorithms are granted a restricted computational budget per time step
while training. We apply this setting to large-scale semi-supervised CL
scenarios with sparse label rates. Previously proficient CL methods perform
very poorly in this challenging setting: overfitting to the sparse labeled data
and an insufficient computational budget are the two main culprits for this
poor performance. Our new setting encourages learning methods to utilize the
unlabeled data effectively and efficiently during training. To that end, we
propose a simple but highly effective baseline, DietCL, which jointly utilizes
both unlabeled and labeled data and meticulously allocates the computational
budget between the two types of data. We validate our baseline, at scale, on
several datasets, e.g., CLOC, ImageNet10K, and CGLM, under constrained-budget
setups. DietCL outperforms, by a large margin, all existing supervised CL
algorithms as well as more recent continual semi-supervised methods. Our
extensive analysis and ablations demonstrate that DietCL remains stable across
a full spectrum of label sparsity levels and computational budgets, as well as
under various other ablations.
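The abstract does not detail DietCL's training procedure. As a purely illustrative sketch of the setting it describes, the following PyTorch-style loop spends a fixed per-time-step budget of gradient updates, split between the sparsely labeled and the abundant unlabeled portions of the stream. The 50/50 split, the cross-entropy objective, and the confidence-thresholded pseudo-labeling on unlabeled data are assumptions made for illustration, not the paper's actual allocation strategy or unlabeled-data objective.

import torch
import torch.nn.functional as F
from itertools import cycle

def train_time_step(model, optimizer, labeled_loader, unlabeled_loader,
                    budget_updates, labeled_fraction=0.5, conf_threshold=0.9):
    # ASSUMPTION: the equal budget split and pseudo-labeling below are
    # illustrative placeholders; the abstract only states that DietCL
    # allocates a fixed per-step compute budget across both data types.
    labeled_budget = int(budget_updates * labeled_fraction)
    labeled_iter = cycle(labeled_loader)      # sparse labeled batches (x, y)
    unlabeled_iter = cycle(unlabeled_loader)  # unlabeled batches (x, _)
    for step in range(budget_updates):        # hard cap on compute this step
        optimizer.zero_grad()
        if step < labeled_budget:
            x, y = next(labeled_iter)                    # labeled update
            loss = F.cross_entropy(model(x), y)
        else:
            x, _ = next(unlabeled_iter)                  # unlabeled update
            with torch.no_grad():                        # derive pseudo-labels
                probs = F.softmax(model(x), dim=1)
                conf, pseudo_y = probs.max(dim=1)
            keep = conf >= conf_threshold                # confident ones only
            if not keep.any():
                continue
            loss = F.cross_entropy(model(x)[keep], pseudo_y[keep])
        loss.backward()
        optimizer.step()

Because the loop body is bounded by budget_updates regardless of how much data arrives, total training compute grows with the number of time steps rather than with stream size, which is the constraint the proposed setting imposes.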