Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings
CoRR (2023)
Abstract
Recent advances in large language models elicit chain-of-thought reasoning
that allows models to decompose problems in a human-like fashion. Though
this paradigm improves multi-step reasoning ability in language
models, it is limited by being unimodal and applied mainly to
question-answering tasks. We claim that incorporating visual augmentation into
reasoning is essential, especially for complex, imaginative tasks.
Consequently, we introduce VCoT, a novel method that leverages chain-of-thought
prompting with vision-language grounding to recursively bridge the logical gaps
within sequential data. Our method uses visual guidance to generate synthetic
multimodal infillings that add consistent and novel information to reduce the
logical gaps for downstream tasks that can benefit from temporal reasoning, as
well as provide interpretability into models' multi-step reasoning. We apply
VCoT to the Visual Storytelling and WikiHow summarization datasets and
demonstrate through human evaluation that VCoT offers novel and consistent
synthetic data augmentation that beats chain-of-thought baselines and can be
used to enhance downstream performance.
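The recursive gap-bridging described above can be sketched in a few lines. This is a minimal, text-only illustration under stated assumptions: `infill` stands in for a hypothetical call to a vision-language model that generates a multimodal infilling between two adjacent steps, and the recursion depth controls how finely large logical gaps are subdivided; none of these names come from the paper's released code.

```python
def bridge_gaps(seq, infill, depth=2):
    """Recursively insert synthetic infillings between adjacent steps.

    `infill(prev, nxt)` is assumed (hypothetical) to query a
    vision-language model and return an infilling that bridges the
    logical gap between the two steps.
    """
    if depth == 0 or len(seq) < 2:
        return list(seq)
    out = [seq[0]]
    for prev, nxt in zip(seq, seq[1:]):
        mid = infill(prev, nxt)  # candidate infilling for this gap
        # Recurse on the two smaller gaps the infilling creates,
        # so wide logical jumps get bridged at finer granularity.
        sub = bridge_gaps([prev, mid, nxt], infill, depth - 1)
        out.extend(sub[1:])  # skip `prev`, already appended
    return out

# Toy usage with a text-only stand-in for the infilling model.
demo = bridge_gaps(["wake up", "arrive at work"],
                   lambda a, b: f"({a} -> {b})", depth=1)
```

In the paper's setting the infilling step is guided by generated images, and candidates are filtered for consistency and novelty before being inserted; the sketch omits that selection step.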
Keywords
multimodal