Vamos: Versatile Action Models for Video Understanding.
CoRR (2023)
Abstract
What makes good video representations for video understanding, such as anticipating future activities or answering video-conditioned questions? While earlier approaches focus on end-to-end learning directly from video pixels, we propose to revisit text-based representations, such as discrete action labels or free-form video captions, which are interpretable and can be directly consumed by large language models (LLMs). Intuitively, different video understanding tasks may require representations that are complementary and at different granularities. To this end, we propose versatile action models (Vamos), a learning framework powered by a large language model as the "reasoner", which can flexibly leverage visual embeddings, action labels, and free-form descriptions extracted from videos as its input. We evaluate Vamos on four complementary video understanding benchmarks, Ego4D, NExT-QA, IntentQA, and EgoSchema, on its capability to model temporal dynamics, encode visual history, and perform reasoning. Surprisingly, we observe that text-based representations consistently achieve competitive performance on all benchmarks, and that visual embeddings provide marginal or no performance improvement, demonstrating the effectiveness of text-based video representation in the LLM era. We perform extensive ablation studies and qualitative analysis to support our observations, and achieve state-of-the-art performance on three benchmarks.
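The sketch below is a minimal, hypothetical illustration of the idea the abstract describes: turning a video into a text-based representation (captions or action labels) and handing it to an LLM "reasoner" for multiple-choice question answering. It is not the authors' implementation; the build_prompt helper, the example captions, and the use of the small gpt2 model via the Hugging Face pipeline API are assumptions made purely for demonstration.

```python
# Sketch only: text-based video representation fed to an LLM reasoner.
# build_prompt, the captions, and the gpt2 model choice are hypothetical.
from transformers import pipeline  # assumes Hugging Face transformers is installed


def build_prompt(captions, question, choices):
    """Concatenate per-segment captions (or action labels) into a textual
    video history, then append a multiple-choice question."""
    history = "\n".join(f"[t={i}] {c}" for i, c in enumerate(captions))
    options = "\n".join(f"({chr(ord('A') + i)}) {c}" for i, c in enumerate(choices))
    return (
        "Video description:\n" + history
        + "\n\nQuestion: " + question
        + "\nChoices:\n" + options
        + "\nAnswer:"
    )


# Hypothetical captions produced by an off-the-shelf video captioner.
captions = [
    "a person opens the fridge and takes out a carton of milk",
    "the person pours milk into a glass on the counter",
]
question = "What will the person most likely do next?"
choices = ["drink the milk", "water the plants", "leave the kitchen empty-handed"]

prompt = build_prompt(captions, question, choices)

# Any instruction-following LLM could serve as the reasoner; gpt2 is used
# here only because it is small, not because Vamos uses it.
reasoner = pipeline("text-generation", model="gpt2")
print(reasoner(prompt, max_new_tokens=10)[0]["generated_text"])
```

Because the video representation is plain text, it stays interpretable and can be edited or truncated before being passed to the LLM, which is the property the abstract highlights.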