HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding
arXiv (2024)
Abstract
Visual grounding, which aims to ground a visual region via natural language,
is a task that heavily relies on cross-modal alignment. Existing works utilized
uni-modal pre-trained models to transfer visual/linguistic knowledge separately
while ignoring multimodal correspondence information. Motivated by recent
advancements in contrastive language-image pre-training and low-rank adaptation
(LoRA) methods, we aim to solve the grounding task based on multimodal
pre-training. However, significant task gaps exist between pre-training
and grounding. To address these gaps, we propose a concise and
efficient hierarchical multimodal fine-grained modulation framework, namely
HiVG. Specifically, HiVG consists of a multi-layer adaptive cross-modal bridge
and a hierarchical multimodal low-rank adaptation (Hi LoRA) paradigm. The
cross-modal bridge can address the inconsistency between visual features and
those required for grounding, and establish a connection between multi-level
visual and text features. Hi LoRA prevents the accumulation of perceptual
errors by adapting the cross-modal features from shallow to deep layers in a
hierarchical manner. Experimental results on five datasets demonstrate the
effectiveness of our approach and showcase its significant grounding
capabilities as well as its promising energy efficiency. The project
page: https://github.com/linhuixiao/HiVG.
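
To make the LoRA mechanics behind Hi LoRA concrete, here is a minimal numpy sketch, not the authors' implementation: each layer keeps its pre-trained weight W frozen and learns a low-rank update BA, and a hierarchical schedule adapts and merges these updates stage by stage from shallow to deep layers. The class/function names and the "training" stand-in are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALayer:
    """A frozen weight matrix with a trainable low-rank update (standard LoRA)."""
    def __init__(self, d_out, d_in, r=4, alpha=8.0):
        self.W = rng.standard_normal((d_out, d_in))  # frozen pre-trained weight
        self.A = np.zeros((r, d_in))                 # LoRA down-projection
        self.B = np.zeros((d_out, r))                # LoRA up-projection
        self.scale = alpha / r

    def forward(self, x):
        # Effective weight is W + scale * B @ A; only A and B would be trained.
        return (self.W + self.scale * self.B @ self.A) @ x

    def merge(self):
        # Fold the learned low-rank update into the frozen weight, then reset it.
        self.W = self.W + self.scale * self.B @ self.A
        self.A[:] = 0.0
        self.B[:] = 0.0

def hierarchical_adapt(layers, stages):
    """Hypothetical Hi LoRA-style schedule: adapt and consolidate layer groups
    from shallow to deep, so later stages build on already-merged updates."""
    for stage in stages:
        for idx in stage:
            # Stand-in for a training step: give the LoRA factors small values.
            layers[idx].A = rng.standard_normal(layers[idx].A.shape) * 0.01
            layers[idx].B = rng.standard_normal(layers[idx].B.shape) * 0.01
        for idx in stage:
            layers[idx].merge()  # consolidate before moving to deeper layers

# Example: adapt the shallowest layer first, then the two deeper ones.
layers = [LoRALayer(8, 8) for _ in range(3)]
hierarchical_adapt(layers, stages=[[0], [1, 2]])
```

The key point the sketch illustrates is that merging shallow-layer adaptations before adapting deeper layers lets later stages see already-corrected features, which is how the paper argues perceptual-error accumulation is avoided.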