Improved Zero-Shot Classification by Adapting VLMs with Text Descriptions
CoRR (2024)
Abstract
The zero-shot performance of existing vision-language models (VLMs) such as
CLIP is limited by the availability of large-scale, aligned image and text
datasets in specific domains. In this work, we leverage two complementary
sources of information – descriptions of categories generated by large
language models (LLMs) and abundant, fine-grained image classification datasets
– to improve the zero-shot classification performance of VLMs across
fine-grained domains. On the technical side, we develop methods to train VLMs
with this "bag-level" image-text supervision. We find that simply using these
attributes at test-time does not improve performance, but our training
strategy does: on the iNaturalist dataset, for example, it leads to an average
improvement of 4-5% in zero-shot classification accuracy for novel classes
of birds and flowers. Similar improvements are observed in domains where a
subset of the categories was used to fine-tune the model. By prompting LLMs in
various ways, we generate descriptions that capture visual appearance, habitat,
and geographic regions and pair them with existing attributes such as the
taxonomic structure of the categories. We systematically evaluate their ability
to improve zero-shot categorization in natural domains. Our findings suggest
that geographic priors can be just as effective and are complementary to visual
appearance. Our method also outperforms prior work on prompt-based tuning of
VLMs. We plan to release the benchmark, consisting of 7 datasets, which will
contribute to future research in zero-shot recognition.
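The core test-time mechanism the abstract describes — representing each category by LLM-generated text descriptions and classifying an image against those descriptions in a shared embedding space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed` function below is a deterministic stand-in for CLIP's text and image encoders, and the class descriptions are hypothetical examples, not taken from the paper.

```python
import numpy as np
import zlib

DIM = 64

def embed(text: str) -> np.ndarray:
    """Stand-in encoder: a deterministic unit vector seeded by the text.
    A real system would use CLIP's text/image encoders here."""
    rng = np.random.default_rng(zlib.crc32(text.encode("utf-8")))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

def class_prototype(descriptions: list[str]) -> np.ndarray:
    """Average the embeddings of several descriptions of one category
    (visual appearance, habitat, etc.), then re-normalize."""
    m = np.mean([embed(d) for d in descriptions], axis=0)
    return m / np.linalg.norm(m)

def zero_shot_classify(image_emb: np.ndarray, prototypes: dict) -> str:
    """Return the class whose prototype has the highest cosine similarity."""
    names = list(prototypes)
    sims = [float(image_emb @ prototypes[n]) for n in names]
    return names[int(np.argmax(sims))]

# Hypothetical LLM-generated descriptions (illustrative only).
classes = {
    "cardinal": ["a bright red songbird with a crest",
                 "found in woodlands of eastern North America"],
    "sunflower": ["a tall flower with large yellow petals",
                  "grown in open fields during summer"],
}
protos = {name: class_prototype(descs) for name, descs in classes.items()}

# Simulate an image embedding close to the "cardinal" prototype.
query = protos["cardinal"] + 0.05 * embed("perturbation")
query = query / np.linalg.norm(query)
print(zero_shot_classify(query, protos))  # → cardinal
```

The abstract notes that simply averaging such descriptions at test time does not by itself improve performance; the paper's contribution is training the VLM with this bag-level image-text supervision so that the description embeddings become discriminative.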