Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models
CoRR (2024)
Abstract
State-of-the-art Large Multi-Modal Models (LMMs) have demonstrated
exceptional capabilities in vision-language tasks. Despite their advanced
functionalities, the performance of LMMs is still limited in challenging
scenarios that require complex reasoning with multiple levels of visual
information. Existing prompting techniques for LMMs focus on either improving
textual reasoning or leveraging tools for image preprocessing, lacking a simple
and general visual prompting scheme to promote vision-language coordination in
LMMs. In this work, we propose Scaffold prompting that scaffolds coordinates to
promote vision-language coordination. Specifically, Scaffold overlays a dot
matrix within the image as visual information anchors and leverages
multi-dimensional coordinates as textual positional references. Extensive
experiments on a wide range of challenging vision-language tasks demonstrate
the superiority of Scaffold over GPT-4V with textual CoT prompting. Our
code is released at https://github.com/leixy20/Scaffold.
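To make the core idea concrete, below is a minimal sketch of the visual side of Scaffold prompting as described in the abstract: overlaying a dot matrix on an image and labeling each dot with its coordinates so the model can use them as textual positional references. This is an illustrative assumption, not the authors' implementation; the function name, matrix size, dot style, and label format are hypothetical, and the paper's released code should be consulted for the actual scheme.

```python
from PIL import Image, ImageDraw

def overlay_dot_matrix(image_path, rows=6, cols=6, dot_radius=4,
                       color=(0, 0, 0), out_path="scaffolded.png"):
    """Overlay an evenly spaced dot matrix on an image and label each dot
    with its (row, col) coordinate.

    A sketch of the idea only: the actual Scaffold prompting may use a
    different matrix size, dot placement, or coordinate format (e.g. an
    extra image index for multi-image inputs).
    """
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for r in range(1, rows + 1):
        for c in range(1, cols + 1):
            # Evenly spaced grid positions, kept away from the image edges.
            x = w * c / (cols + 1)
            y = h * r / (rows + 1)
            draw.ellipse([x - dot_radius, y - dot_radius,
                          x + dot_radius, y + dot_radius], fill=color)
            # Label the dot so the model can reference it in text,
            # e.g. "the object near (2,3)".
            draw.text((x + dot_radius + 2, y - dot_radius),
                      f"({r},{c})", fill=color)
    img.save(out_path)
    return out_path
```

The annotated image would then be paired with a text prompt telling the model that the overlaid coordinates are available as positional anchors for its reasoning.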