VCoder: Versatile Vision Encoders for Multimodal Large Language Models
CoRR (2023)
Abstract
Humans possess the remarkable skill of Visual Perception, the ability to see
and understand the seen, helping them make sense of the visual world and, in
turn, reason. Multimodal Large Language Models (MLLM) have recently achieved
impressive performance on vision-language tasks ranging from visual
question-answering and image captioning to visual reasoning and image
generation. However, when prompted to identify or count (perceive) the entities
in a given image, existing MLLM systems fail. Working towards developing an
accurate MLLM system for perception and reasoning, we propose using Versatile
vision enCoders (VCoder) as perception eyes for Multimodal LLMs. We feed the
VCoder with perception modalities such as segmentation or depth maps, improving
the MLLM's perception abilities. Secondly, we leverage the images from COCO and
outputs from off-the-shelf vision perception models to create our COCO
Segmentation Text (COST) dataset for training and evaluating MLLMs on the
object perception task. Thirdly, we introduce metrics to assess the object
perception abilities in MLLMs on our COST dataset. Lastly, we provide extensive
experimental evidence proving the VCoder's improved object-level perception
skills over existing Multimodal LLMs, including GPT-4V. To promote research, we
open-source our dataset, code, and models at
https://github.com/SHI-Labs/VCoder
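The object-perception evaluation described above (checking whether an MLLM correctly names the entities in an image) can be illustrated with a toy set-based metric. This is a minimal sketch for intuition only; the function name, inputs, and scoring rule are assumptions, not the paper's exact COST metric definitions:

```python
# Illustrative sketch (an assumption, not the paper's exact metrics):
# compare the object names an MLLM lists in its answer against the
# ground-truth objects extracted from a segmentation map.

def object_perception_scores(predicted, ground_truth):
    """Return (recall, hallucination_rate).

    recall: fraction of ground-truth objects the model named.
    hallucination_rate: fraction of predicted objects that are
    not actually present in the image.
    """
    pred, gt = set(predicted), set(ground_truth)
    recall = len(pred & gt) / len(gt) if gt else 1.0
    hallucination = len(pred - gt) / len(pred) if pred else 0.0
    return recall, hallucination

# Example: the model names 3 objects, one of which ("umbrella")
# does not appear in the image.
r, h = object_perception_scores(
    ["person", "dog", "umbrella"],        # hypothetical MLLM answer
    ["person", "dog", "bench", "car"],    # from a segmentation map
)
# r == 0.5 (2 of 4 ground-truth objects named)
# h == 1/3 (1 of 3 predictions hallucinated)
```

A real evaluation along these lines would also need to normalize synonyms and handle per-category counts, which the paper's metrics address on the COST dataset.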