Few and Fewer: Learning Better from Few Examples Using Fewer Base Classes
CoRR (2024)
Abstract
When training data is scarce, it is common to make use of a feature extractor
that has been pre-trained on a large base dataset, either by fine-tuning its
parameters on the “target” dataset or by directly adopting its representation
as features for a simple classifier. Fine-tuning is ineffective for few-shot
learning, since the target dataset contains only a handful of examples.
However, directly adopting the features without fine-tuning relies on the base
and target distributions being similar enough that these features achieve
separability and generalization. This paper investigates whether better
features for the target dataset can be obtained by training on fewer base
classes, seeking to identify a more useful base dataset for a given task. We
consider cross-domain few-shot image classification in eight different domains
from Meta-Dataset and entertain multiple real-world settings (domain-informed,
task-informed and uninformed) where progressively less detail is known about
the target task. To our knowledge, this is the first demonstration that
fine-tuning on a subset of carefully selected base classes can significantly
improve few-shot learning. Our contributions are simple and intuitive methods
that can be implemented in any few-shot solution. We also give insights into
the conditions in which these solutions are likely to provide a boost in
accuracy. We release the code to reproduce all experiments from this paper on
GitHub. https://github.com/RafLaf/Few-and-Fewer.git
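The base-class selection described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation (see the linked repository for that): the function name, the cosine-similarity criterion, and the choice of `k` are assumptions. It ranks base classes by how closely each class prototype (mean feature vector) aligns with the mean feature of the target data, and keeps the top-k classes for fine-tuning.

```python
import numpy as np

def select_base_classes(base_feats, base_labels, target_feats, k=50):
    """Rank base classes by cosine similarity between each class prototype
    (mean feature of that class) and the mean target feature; return the
    k most similar class labels. Hypothetical sketch, not the paper's code."""
    # Unit-normalized centroid of the target features.
    target_centroid = target_feats.mean(axis=0)
    target_centroid = target_centroid / np.linalg.norm(target_centroid)

    classes = np.unique(base_labels)
    scores = []
    for c in classes:
        # Prototype of base class c, unit-normalized.
        proto = base_feats[base_labels == c].mean(axis=0)
        proto = proto / np.linalg.norm(proto)
        scores.append(proto @ target_centroid)  # cosine similarity

    # Sort classes from most to least similar and keep the top k.
    order = np.argsort(scores)[::-1]
    return classes[order[:k]]
```

In a task-informed setting, `target_feats` would come from the few-shot support examples; in a domain-informed setting, from unlabeled data of the target domain.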