Incorporating Visual Experts to Resolve the Information Loss in Multimodal Large Language Models
CoRR(2024)
Abstract
Multimodal Large Language Models (MLLMs) are experiencing rapid growth,
yielding a plethora of noteworthy contributions in recent months. The
prevailing trend involves adopting data-driven methodologies, wherein diverse
instruction-following datasets are collected. However, a prevailing challenge
persists in these approaches: limited visual perception ability, since
CLIP-like encoders are employed to extract visual information from the inputs.
Though these encoders are pre-trained on billions of image-text pairs, they
still grapple with an information-loss dilemma, given that textual captions
only partially capture the contents depicted in images.
To address this limitation, this paper proposes to improve the visual
perception ability of MLLMs through a mixture-of-experts knowledge enhancement
mechanism. Specifically, we introduce a novel method that incorporates
multi-task encoders and visual tools into the existing MLLMs training and
inference pipeline, aiming to provide a more comprehensive and accurate
summarization of visual inputs. Extensive experiments evaluate its
effectiveness in advancing MLLMs, showcasing the improved visual perception
achieved through the integration of visual experts.
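The abstract describes feeding an MLLM with outputs from multiple visual experts in addition to a CLIP-like encoder. A minimal sketch of this mixture-of-experts idea is shown below; every name, dimension, and the use of simple linear projections are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

# Hypothetical sketch: features from several visual experts (e.g. a
# CLIP-like encoder, an object detector, an OCR tool) are each projected
# into the language model's embedding space and concatenated as extra
# visual tokens prepended to the text prompt. Dimensions are toy values.

rng = np.random.default_rng(0)

def project(features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Linearly map one expert's features into the LLM embedding space."""
    return features @ weight

llm_dim = 8  # toy embedding size of the language model

# Each "expert" yields (num_tokens, feature_dim) features for one image.
expert_feats = {
    "clip":     rng.standard_normal((4, 6)),  # global semantics
    "detector": rng.standard_normal((3, 5)),  # object-level cues
    "ocr":      rng.standard_normal((2, 7)),  # text appearing in the image
}

# One learned projection per expert (randomly initialized here).
projections = {name: rng.standard_normal((f.shape[1], llm_dim))
               for name, f in expert_feats.items()}

# Project every expert's tokens and concatenate along the token axis,
# forming the visual prefix that would be fed to the LLM.
visual_tokens = np.concatenate(
    [project(f, projections[name]) for name, f in expert_feats.items()],
    axis=0,
)

print(visual_tokens.shape)  # (9, 8): 4 + 3 + 2 tokens, each in llm_dim
```

In this toy setup the LLM would receive nine visual tokens instead of the four CLIP tokens alone, which is the gist of how extra experts can compensate for information the contrastive encoder discards.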