Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models
CoRR (2023)
Abstract
Modern Large Vision-Language Models (LVLMs) share the same vision vocabulary
-- CLIP -- which covers most common vision tasks. However, for special vision
tasks that need dense and fine-grained visual perception, e.g.,
document-level OCR or chart understanding, especially in non-English scenarios,
the CLIP-style vocabulary can be inefficient at tokenizing the visual
knowledge and may even suffer from out-of-vocabulary problems. Accordingly, we
propose Vary, an efficient and effective method to scale up the vision
vocabulary of LVLMs. The procedure of Vary naturally divides into two stages:
the generation and the integration of a new vision vocabulary. In the first
stage, we devise a vocabulary network along with a tiny decoder-only
transformer to produce the desired vocabulary via autoregression. In the
second, we scale up the vanilla vision vocabulary by merging the new one with
the original (CLIP) one, enabling LVLMs to quickly acquire new features.
Compared to the popular BLIP-2, MiniGPT4, and LLaVA, Vary maintains its
vanilla capabilities while gaining stronger fine-grained perception and
understanding abilities. Specifically, Vary is competent at new
document-parsing features (OCR or markdown conversion) while achieving 78.2%
ANLS on DocVQA and 36.2% on MMVet.
Our code will be publicly available on the homepage.
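
The integration step described above -- merging a newly generated vocabulary with CLIP's -- amounts to running two vision encoders in parallel and concatenating their token sequences before they enter the LLM. Below is a minimal sketch of that idea; the class, module names, dimensions, and projection layers are illustrative assumptions for exposition, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class VaryStyleVisionInput(nn.Module):
    """Minimal sketch of the vocabulary-merging idea: encode an image with
    both the original CLIP vocabulary and a new, separately trained vision
    vocabulary, then hand the concatenated tokens to the LLM. All names and
    dimensions here are assumptions, not the authors' code."""

    def __init__(self, clip_encoder: nn.Module, new_vocab_encoder: nn.Module,
                 clip_dim: int = 1024, new_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.clip_encoder = clip_encoder            # original (vanilla) vocabulary
        self.new_vocab_encoder = new_vocab_encoder  # vocabulary network from stage one
        # Separate linear projections map each token stream into the
        # LLM's embedding space.
        self.clip_proj = nn.Linear(clip_dim, llm_dim)
        self.new_proj = nn.Linear(new_dim, llm_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Each encoder yields a sequence of visual tokens: (batch, tokens, dim).
        clip_tokens = self.clip_encoder(image)
        new_tokens = self.new_vocab_encoder(image)
        # Concatenating along the sequence axis is the "scaled-up" vocabulary:
        # the LLM sees tokens from both encoders for the same image.
        merged = torch.cat(
            [self.clip_proj(clip_tokens), self.new_proj(new_tokens)], dim=1)
        return merged  # prepended to the text embeddings of the LLM
```

Keeping the two projections separate lets the frozen CLIP branch preserve the model's vanilla capabilities while the new branch carries the dense, fine-grained features (e.g., document text) that CLIP tokenizes poorly.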