TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
arXiv (2023)
Abstract
In recent years, multimodal large language models (MLLMs) such as GPT-4V have
demonstrated remarkable advancements, excelling in a variety of vision-language
tasks. Despite their prowess, the closed-source nature and computational
demands of such models limit their accessibility and applicability. This study
introduces TinyGPT-V, a novel open-source MLLM, designed for efficient training
and inference across various vision-language tasks, including image captioning
(IC) and visual question answering (VQA). Leveraging a compact yet powerful
architecture, TinyGPT-V integrates the Phi-2 language model with pre-trained
vision encoders, utilizing a unique mapping module for visual and linguistic
information fusion. With a training regimen optimized for small backbones and
employing a diverse dataset amalgam, TinyGPT-V requires significantly lower
computational resources (24GB for training and as little as 8GB for inference)
without compromising on performance. Our experiments demonstrate that
TinyGPT-V, with its 2.8-billion-parameter language model, achieves results on
VQA and image inference tasks comparable to those of its larger counterparts
while being
uniquely suited for deployment on resource-constrained devices through
innovative quantization techniques. This work not only paves the way for more
accessible and efficient MLLMs but also underscores the potential of smaller,
optimized models in bridging the gap between high performance and computational
efficiency in real-world applications. Additionally, this paper introduces a
new approach to multimodal large language models using smaller backbones. Our
code and training weights are publicly available.
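
As a rough illustration of the architecture described in the abstract, the sketch below shows how a mapping module might project frozen vision-encoder features into the language model's embedding space before they are prepended to the text embeddings. The class name, layer layout, and dimensions (1408 for an EVA-ViT-style encoder, 2560 for Phi-2's hidden size) are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class VisionToLanguageMapper(nn.Module):
    """Illustrative mapping module: projects frozen vision-encoder patch
    features into the language model's token-embedding space. The single
    MLP projection here is an assumption, not the paper's exact module."""

    def __init__(self, vision_dim: int = 1408, llm_dim: int = 2560):
        super().__init__()
        # 1408 matches an EVA-ViT-g encoder; 2560 matches Phi-2's hidden size.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim)
        # Returns soft "visual tokens" consumable as LLM input embeddings.
        return self.proj(vision_feats)


# Usage: project encoder output, then prepend it to the text embeddings.
mapper = VisionToLanguageMapper()
visual_tokens = mapper(torch.randn(1, 257, 1408))  # -> (1, 257, 2560)
```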
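The 8GB inference figure implies weight quantization of the language backbone. Below is a minimal, hypothetical recipe using the Hugging Face transformers and bitsandbytes stack to load the Phi-2 backbone with 8-bit weights; TinyGPT-V's own quantization pipeline and released checkpoints may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical: 8-bit weight loading via bitsandbytes, which roughly halves
# weight memory versus fp16. Not necessarily TinyGPT-V's own scheme.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=quant_config,
    device_map="auto",  # place layers across available GPU/CPU memory
)

prompt = "Describe the image in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```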