Do CLIPs Always Generalize Better than ImageNet Models?
arXiv (2024)
Abstract
Large vision language models, such as CLIPs, have revolutionized modern
machine learning. CLIPs have demonstrated great generalizability under
distribution shifts, supported by an increasing body of literature. However,
the evaluation datasets for CLIPs are primarily variations of ImageNet
benchmarks, which may not fully reflect the extent to which CLIPs, e.g., those
pre-trained on LAION, are robust to spurious correlations. To bridge this gap,
we collect a real-world dataset called CounterAnimal that contains realistic
spurious features found in animal photos. CounterAnimal consists of (a) the
common group, with animals on typical backgrounds, and (b) the counter group,
with animals on unusual backgrounds. The performance drop from the common to
the counter group quantifies how much models rely on spurious features
(i.e., backgrounds) to predict the animals. We find that CLIPs trained on
either LAION or OpenAI's data exhibit notable performance drops on the
counter group. Surprisingly, we observe that single-modal models trained on
ImageNet are more robust than CLIPs. We provide both theoretical and empirical
explanations for why CLIPs still learn spurious features. Our findings suggest
that distribution shifts remain an open problem for CLIPs, and one needs to be
cautious about test setups when evaluating foundation models pre-trained on a
significantly different scale and distribution.
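
As a concrete illustration of the evaluation protocol the abstract describes, the sketch below computes the common-to-counter accuracy drop. The predictions and labels are toy placeholders, not the actual CounterAnimal annotations or released evaluation code.

# Minimal sketch of the common-to-counter evaluation: the same classifier is
# scored on the common and counter groups, and the accuracy drop measures its
# reliance on background (spurious) features. All data here are hypothetical.

def top1_accuracy(preds, labels):
    """Fraction of examples where the predicted class matches the label."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy predictions for one class (label id 0), e.g., a bear species.
common_preds  = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # typical backgrounds
counter_preds = [0, 1, 2, 0, 1, 0, 2, 0, 1, 0]  # unusual backgrounds
labels = [0] * 10

common_acc  = top1_accuracy(common_preds, labels)   # 0.90
counter_acc = top1_accuracy(counter_preds, labels)  # 0.50
drop = common_acc - counter_acc                     # 0.40: larger drop means
print(f"common {common_acc:.2f}, counter {counter_acc:.2f}, drop {drop:.2f}")

A larger drop indicates the model leaned on the background rather than the animal itself; comparing drops across CLIPs and ImageNet-trained models is the paper's central measurement.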