Mind the Modality Gap: Towards a Remote Sensing Vision-Language Model via Cross-modal Alignment
CoRR (2024)
Abstract
Deep Learning (DL) is undergoing a paradigm shift with the emergence of
foundation models, aptly named for their crucial, yet incomplete nature. In this
work, we focus on Contrastive Language-Image Pre-training (CLIP), an
open-vocabulary foundation model, which achieves high accuracy across many
image classification tasks and is often competitive with a fully supervised
baseline without being explicitly trained. Nevertheless, there are still
domains where zero-shot CLIP performance is far from optimal, such as Remote
Sensing (RS) and medical imagery. These domains not only exhibit
fundamentally different distributions from natural images, but also
commonly rely on complementary modalities beyond RGB to derive meaningful
insights. To this end, we propose a methodology for aligning distinct RS
imagery modalities with the visual and textual modalities of CLIP. Our
two-stage procedure comprises robust fine-tuning of CLIP to handle the
distribution shift, followed by cross-modal alignment of an RS modality
encoder to extend the zero-shot capabilities of CLIP. We
ultimately demonstrate our method on the tasks of RS imagery classification and
cross-modal retrieval. We empirically show that both robust fine-tuning and
cross-modal alignment translate into significant performance gains across
several RS benchmark datasets. Notably, these enhancements are achieved without
the reliance on textual descriptions, without introducing any task-specific
parameters, without training from scratch and without catastrophic forgetting.
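The abstract does not spell out the alignment objective. A minimal sketch of the second stage follows, assuming a symmetric InfoNCE contrastive loss that pulls a trainable RS modality encoder (e.g. for SAR imagery) toward a frozen, robustly fine-tuned CLIP image encoder on paired scenes; `sar_encoder`, `clip_image_encoder`, and `paired_loader` are hypothetical names, and this is illustrative rather than the authors' implementation.

```python
# Sketch (not the authors' released code): align a new RS modality
# encoder to frozen CLIP image embeddings on paired (SAR, RGB) scenes.
import torch
import torch.nn.functional as F

def infonce_align_loss(rs_emb, clip_emb, temperature=0.07):
    """Symmetric contrastive loss pulling paired embeddings together."""
    rs_emb = F.normalize(rs_emb, dim=-1)
    clip_emb = F.normalize(clip_emb, dim=-1)
    # Similarity of every RS embedding to every CLIP embedding in the batch.
    logits = rs_emb @ clip_emb.t() / temperature
    # Matching pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Training-loop sketch: CLIP itself stays frozen at this stage, which is
# consistent with the claims of no task-specific parameters, no training
# from scratch, and no catastrophic forgetting.
# for sar_batch, rgb_batch in paired_loader:        # hypothetical loader
#     with torch.no_grad():
#         clip_emb = clip_image_encoder(rgb_batch)  # frozen CLIP encoder
#     rs_emb = sar_encoder(sar_batch)               # trainable RS encoder
#     loss = infonce_align_loss(rs_emb, clip_emb)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Because the RS encoder is aligned to CLIP's shared embedding space rather than to captions, zero-shot classification and cross-modal retrieval come for free from CLIP's existing text encoder, with no textual descriptions of the new modality required.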