Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment
CoRR (2024)
Abstract
Evaluating and rethinking the current landscape of Large Multimodal Models
(LMMs), we observe that widely-used visual-language projection approaches
(e.g., Q-former or MLP) focus on the alignment of image-text descriptions yet
ignore the visual knowledge-dimension alignment, i.e., connecting visuals to
their relevant knowledge. Visual knowledge plays a significant role in
analyzing, inferring, and interpreting information from visuals, helping
improve the accuracy of answers to knowledge-based visual questions. In this
paper, we mainly explore improving LMMs with visual-language knowledge
alignment, especially aimed at challenging knowledge-based visual question
answering (VQA). To this end, we present a Cognitive Visual-Language Mapper
(CVLM), which contains a pretrained Visual Knowledge Aligner (VKA) and a
Fine-grained Knowledge Adapter (FKA) used in the multimodal instruction tuning
stage. Specifically, we design the VKA based on the interaction between a small
language model and a visual encoder, training it on collected image-knowledge
pairs to achieve visual knowledge acquisition and projection. FKA is employed
to distill the fine-grained visual knowledge of an image and inject it into
Large Language Models (LLMs). We conduct extensive experiments on
knowledge-based VQA benchmarks and experimental results show that CVLM
significantly improves the performance of LMMs on knowledge-based VQA (an
average gain of 5.0%). Ablation studies further verify the effectiveness of
the VKA and FKA, respectively.
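
The abstract describes the dataflow (visual encoder → VKA → FKA → LLM) but not its implementation. Below is a minimal PyTorch sketch of that pipeline under stated assumptions: the module names, dimensions, and the choice of cross-attention for the VKA and an MLP for the FKA are illustrative guesses, not the authors' released code.

```python
# Hypothetical sketch of the CVLM dataflow described in the abstract.
# All shapes (d_vis, d_lm, d_llm) and internals are illustrative assumptions.
import torch
import torch.nn as nn

class VisualKnowledgeAligner(nn.Module):
    """VKA: learned queries attend over visual-encoder features to produce
    knowledge-aligned visual tokens (assumption: cross-attention, standing in
    for the small-LM / visual-encoder interaction the abstract mentions)."""
    def __init__(self, d_vis=1024, d_lm=768, n_queries=32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_lm))
        self.proj_vis = nn.Linear(d_vis, d_lm)
        self.cross_attn = nn.MultiheadAttention(d_lm, num_heads=8,
                                                batch_first=True)

    def forward(self, vis_feats):                 # (B, N_patches, d_vis)
        kv = self.proj_vis(vis_feats)             # project into LM space
        q = self.queries.unsqueeze(0).expand(vis_feats.size(0), -1, -1)
        knowledge, _ = self.cross_attn(q, kv, kv) # (B, n_queries, d_lm)
        return knowledge

class FineGrainedKnowledgeAdapter(nn.Module):
    """FKA: maps the distilled knowledge tokens into the LLM embedding
    space so they can be injected as extra input tokens (assumption: MLP)."""
    def __init__(self, d_lm=768, d_llm=4096):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Linear(d_lm, d_llm), nn.GELU(), nn.Linear(d_llm, d_llm))

    def forward(self, knowledge):
        return self.adapter(knowledge)            # (B, n_queries, d_llm)

# Usage: in this reading, the knowledge tokens would be concatenated with
# the text embeddings before the LLM, alongside the usual image projection.
vka, fka = VisualKnowledgeAligner(), FineGrainedKnowledgeAdapter()
vis_feats = torch.randn(2, 256, 1024)             # e.g. ViT patch features
llm_tokens = fka(vka(vis_feats))                  # tokens injected into the LLM
```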