Multimodal Unsupervised Domain Generalization by Retrieving Across the Modality Gap
CoRR (2024)
Abstract
Domain generalization (DG) is the important problem of learning a model that
generalizes to unseen test domains by leveraging one or more source domains,
under the assumption of shared label spaces. However, most DG methods assume
access to abundant source data in the target label space, a requirement that
proves overly stringent for many real-world applications, where acquiring data
in the same label space as the target task is prohibitively expensive. For
this setting, we
tackle the multimodal version of the unsupervised domain generalization (MUDG)
problem, which uses a large task-agnostic unlabeled source dataset during
finetuning. Our framework does not explicitly assume any relationship between
the source dataset and target task. Instead, it relies only on the premise that
the source dataset can be accurately and efficiently searched in a joint
vision-language space. We make three contributions in the MUDG setting.
Firstly, we show theoretically that cross-modal approximate nearest neighbor
search suffers from low recall due to the large distance between text queries
and the image centroids used for coarse quantization. Accordingly, we propose
paired k-means, a simple clustering algorithm that improves nearest neighbor
recall by storing centroids in query space instead of image space. Secondly, we
propose an adaptive text augmentation scheme for target labels designed to
improve zero-shot accuracy and diversify retrieved image data. Lastly, we
present two simple but effective components to further improve downstream
target accuracy. We compare against state-of-the-art name-only transfer,
source-free DG and zero-shot (ZS) methods on their respective benchmarks and
show consistent improvement in accuracy on 20 diverse datasets. Code is
available: https://github.com/Chris210634/mudg
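
To make the paired k-means idea concrete, below is a minimal sketch of one plausible reading of the abstract: coarse-quantization centroids live in the query (text) space, so a text query is compared against centroids from its own modality rather than across the modality gap. This is a hypothetical reconstruction, not the authors' reference implementation; the cross-modal assignment rule, the assumed availability of a paired text embedding per indexed image, and the name `paired_kmeans` are all assumptions made for illustration.

```python
# Hypothetical sketch of "paired k-means" as described in the abstract:
# store centroids in query (text) space instead of image space.
# NOT the authors' reference code; assignment rule and pairing
# assumptions are illustrative guesses.
import numpy as np

def paired_kmeans(X_img, X_txt, k, n_iter=20, seed=0):
    """Cluster an image index while keeping centroids in text space.

    X_img: (n, d) L2-normalized image embeddings (the indexed items).
    X_txt: (n, d) L2-normalized text embeddings paired with each image
           (assumption: e.g., caption embeddings from the same VLM).
    Returns (centroids, assignments); centroids live in text space.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen paired text embeddings,
    # so they start out in query space.
    centroids = X_txt[rng.choice(len(X_txt), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment (assumption): compare each *image* embedding to the
        # text-space centroids by cosine similarity, mirroring the
        # cross-modal comparison a text query performs at search time.
        assign = (X_img @ centroids.T).argmax(axis=1)
        # Update: each centroid becomes the normalized mean of the
        # *paired text* embeddings of its cluster members, which keeps
        # the centroid in query space.
        for j in range(k):
            members = X_txt[assign == j]
            if len(members):
                c = members.mean(axis=0)
                centroids[j] = c / np.linalg.norm(c)
    return centroids, assign
```

At search time a text query would be routed to its nearest text-space centroids, and only the images assigned to those cells would be scanned. Because query and centroid now share a modality, query-to-centroid distances stay small, which is the mechanism behind the recall improvement the abstract claims.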