Multi-Grained Cross-modal Alignment for Learning Open-vocabulary Semantic Segmentation from Text Supervision
arXiv (2024)
Abstract
Recently, learning open-vocabulary semantic segmentation from text
supervision has achieved promising downstream performance. Nevertheless,
current approaches encounter an alignment granularity gap owing to the absence
of dense annotations, wherein they learn coarse image/region-text alignment
during training yet perform group/pixel-level predictions at inference. This
discrepancy leads to suboptimal learning efficiency and inferior zero-shot
segmentation results. In this paper, we introduce a Multi-Grained Cross-modal
Alignment (MGCA) framework, which explicitly learns pixel-level alignment along
with object- and region-level alignment to bridge the granularity gap without
any dense annotations. Specifically, MGCA constructs pseudo multi-granular
semantic correspondences from image-text pairs and combines them with hard
sampling strategies to facilitate fine-grained cross-modal contrastive
learning. Further, we identify the shortcomings of existing group- and
pixel-level prediction units in downstream segmentation and develop an
adaptive semantic unit that effectively mitigates their respective failure
modes of under- and over-segmentation. Trained solely on CC3M, our method
achieves significant improvements over state-of-the-art methods,
demonstrating its effectiveness and efficiency.
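To make the multi-grained objective concrete, below is a minimal PyTorch
sketch of contrastive alignment at pixel, region, and object granularities.
This is not the authors' code: the function names (info_nce,
hardest_word_loss, multi_grained_loss), the soft region_mask input, and the
argmax-based word matching (standing in for the paper's pseudo multi-granular
correspondences and hard sampling) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k, t=0.07):
    """Symmetric InfoNCE: row i of q is the positive for row i of k."""
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / t                               # (N, N) similarities
    labels = torch.arange(q.size(0), device=q.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def hardest_word_loss(vis, txt, t=0.07):
    # Pair each visual unit with its most similar word: a crude stand-in for
    # the paper's pseudo correspondences and hard sampling, whose exact
    # construction the abstract does not specify.
    sim = torch.einsum('bnd,bwd->bnw',
                       F.normalize(vis, dim=-1), F.normalize(txt, dim=-1))
    idx = sim.argmax(-1)                                 # (B, N) word indices
    pos = torch.gather(txt, 1, idx.unsqueeze(-1).expand(-1, -1, txt.size(-1)))
    return info_nce(vis.flatten(0, 1), pos.flatten(0, 1), t)

def multi_grained_loss(patch_emb, word_emb, region_mask, t=0.07):
    """
    patch_emb   (B, P, D): dense visual embeddings (e.g. ViT patch tokens)
    word_emb    (B, W, D): text-token embeddings
    region_mask (B, R, P): soft assignment of patches to R pseudo regions
    """
    # Pixel (patch) granularity: contrast each patch with a matched word.
    pixel_loss = hardest_word_loss(patch_emb, word_emb, t)

    # Region granularity: average patches into pseudo regions, then contrast.
    region_emb = torch.einsum('brp,bpd->brd', region_mask, patch_emb)
    region_emb = region_emb / region_mask.sum(-1, keepdim=True).clamp_min(1e-6)
    region_loss = hardest_word_loss(region_emb, word_emb, t)

    # Object/image granularity: CLIP-style global contrast across the batch.
    object_loss = info_nce(patch_emb.mean(1), word_emb.mean(1), t)

    return pixel_loss + region_loss + object_loss

# Toy shapes: batch of 4, 196 patches, 16 words, 8 pseudo regions, dim 512.
B, P, W, R, D = 4, 196, 16, 8, 512
loss = multi_grained_loss(torch.randn(B, P, D), torch.randn(B, W, D),
                          torch.rand(B, R, P))
```

The three loss terms mirror the pixel-, region-, and object-level alignment
named in the abstract; the adaptive semantic unit used at inference is a
separate component and is not sketched here.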