Label Propagation for Zero-shot Classification with Vision-Language Models
arXiv (2024)
Abstract
Vision-Language Models (VLMs) have demonstrated impressive performance on
zero-shot classification, i.e. classification when provided merely with a list
of class names. In this paper, we tackle the case of zero-shot classification
in the presence of unlabeled data. We leverage the graph structure of the
unlabeled data and introduce ZLaP, a method based on label propagation (LP)
that utilizes geodesic distances for classification. We tailor LP to graphs
containing both text and image features and further propose an efficient method
for performing inductive inference based on a dual solution and a
sparsification step. We perform extensive experiments to evaluate the
effectiveness of our method on 14 common datasets and show that ZLaP
outperforms the latest related works. Code:
https://github.com/vladan-stojnic/ZLaP
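To make the core idea concrete, below is a minimal sketch of graph-based label propagation for zero-shot classification: text embeddings of class names act as labeled seed nodes, images are unlabeled nodes, and class scores diffuse over a k-nearest-neighbor similarity graph. This is an illustrative assumption-laden toy (the `label_propagation` function, the simple row-normalized diffusion, and all feature values are hypothetical), not the authors' ZLaP implementation, which additionally uses geodesic distances, a dual solution, and sparsification for inductive inference.

```python
# Hypothetical sketch of label propagation over a joint text+image graph.
# NOT the ZLaP implementation; a generic diffusion in the spirit of
# classic label propagation, for illustration only.
import numpy as np

def label_propagation(image_feats, text_feats, k=3, alpha=0.9, iters=50):
    """Propagate class scores from text anchors over a kNN similarity graph."""
    X = np.vstack([text_feats, image_feats])          # graph nodes: texts first
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # cosine-normalize features
    S = X @ X.T                                       # pairwise similarities
    np.fill_diagonal(S, -np.inf)                      # no self-edges
    # Keep only the k strongest edges per node (sparse affinity graph).
    W = np.zeros_like(S)
    for i in range(len(S)):
        nn = np.argsort(S[i])[-k:]
        W[i, nn] = np.maximum(S[i, nn], 0)
    W = (W + W.T) / 2                                 # symmetrize the graph
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)    # row-normalized transitions
    n_cls = len(text_feats)
    Y0 = np.zeros((len(X), n_cls))
    Y0[:n_cls] = np.eye(n_cls)                        # text nodes seed the labels
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (P @ Y) + (1 - alpha) * Y0        # diffusion step
    return Y[n_cls:].argmax(axis=1)                   # predicted class per image
```

The diffusion step is the standard fixed-point iteration from label-propagation literature; the ZLaP paper replaces plain similarity propagation with geodesic distances and derives a dual form so that new (inductive) images can be classified without re-running propagation.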