Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models
arXiv (2024)
Abstract
Visual representation learning has been a cornerstone in computer vision,
evolving from supervised learning with human-annotated labels to aligning
image-text pairs from the Internet. Despite recent advancements in multi-modal
large language models (MLLMs), the visual representations they rely on, such as
CLIP embeddings, often lack access to external world knowledge critical for
real-world visual reasoning. In this work, we propose Visual Table, a novel
visual representation tailored for MLLMs. It provides hierarchical text
descriptions of holistic visual scenes, consisting of a scene description and
multiple object-centric descriptions that encompass categories, attributes, and
knowledge at the instance level. We further develop a scalable generator for visual
table generation and train it on small-scale annotations from GPT4V. Extensive
evaluations demonstrate that, with generated visual tables as additional visual
representations, our model can consistently outperform the state-of-the-art
(SOTA) MLLMs across diverse benchmarks. When visual tables serve as standalone
visual representations, our model can closely match or even beat the SOTA MLLMs
that are built on CLIP visual embeddings. Our code is available at
https://github.com/LaVi-Lab/Visual-Table.
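
The abstract describes a visual table as a hierarchical, text-based representation: one scene-level description plus object-centric entries carrying categories, attributes, and instance-level knowledge. Below is a minimal, hypothetical Python sketch of how such a record might be organized and serialized into text for an MLLM; the class and field names (VisualTable, ObjectEntry, scene_description, etc.) are illustrative assumptions and are not taken from the paper or its released code.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a visual-table record as described in the abstract:
# a scene-level description plus per-object entries with category, attributes,
# and instance-level knowledge. Field names are assumptions, not the paper's schema.

@dataclass
class ObjectEntry:
    category: str                                          # object category, e.g. "dog"
    attributes: List[str] = field(default_factory=list)    # visual attributes, e.g. ["brown", "sitting"]
    knowledge: List[str] = field(default_factory=list)     # external/world knowledge about the instance

@dataclass
class VisualTable:
    scene_description: str                                  # holistic description of the scene
    objects: List[ObjectEntry] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Serialize the table into plain text that an MLLM could consume."""
        lines = [f"Scene: {self.scene_description}"]
        for i, obj in enumerate(self.objects, 1):
            lines.append(
                f"Object {i}: {obj.category}; "
                f"attributes: {', '.join(obj.attributes) or 'n/a'}; "
                f"knowledge: {', '.join(obj.knowledge) or 'n/a'}"
            )
        return "\n".join(lines)

# Example usage
table = VisualTable(
    scene_description="A man walks a dog in a snowy park.",
    objects=[
        ObjectEntry("dog", ["brown", "on a leash"], ["dogs need regular exercise"]),
        ObjectEntry("man", ["wearing a coat"], ["coats are worn in cold weather"]),
    ],
)
print(table.to_prompt())
```

In this reading, the visual table acts as an additional (or standalone) input alongside CLIP-style embeddings: the serialized text is what injects instance-level world knowledge into the MLLM's context.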