Discovering Novel Actions from Open World Egocentric Videos with Object-Grounded Visual Commonsense Reasoning
arXiv (2023)
Abstract
Learning to infer labels in an open world, i.e., in an environment where the
target “labels” are unknown, is an important characteristic for achieving
autonomy. Foundation models, pre-trained on enormous amounts of data, have
shown remarkable generalization skills through prompting, particularly in
zero-shot inference. However, their performance is limited by the correctness
of the target label search space, i.e., the candidate labels provided in the
prompt. In an open world, this search space can be unknown or exceptionally
large, severely degrading their performance. To tackle this
challenging problem, we propose a two-step, neuro-symbolic framework called
ALGO (Action Learning with Grounded Object recognition), which uses symbolic
knowledge stored in large-scale knowledge bases to infer activities in
egocentric videos with limited supervision. First, we propose a neuro-symbolic
prompting approach that uses object-centric vision-language models as a noisy
oracle to ground objects in the video through evidence-based reasoning. Second,
driven by prior commonsense knowledge, we discover plausible activities through
an energy-based symbolic pattern theory framework and learn to ground
knowledge-based action (verb) concepts in the video. Extensive experiments on
four publicly available datasets (EPIC-Kitchens, GTEA Gaze, GTEA Gaze Plus, and
Charades-Ego) demonstrate its effectiveness on open-world activity inference. We
also show that ALGO can be extended to zero-shot inference and demonstrate its
competitive performance on the Charades-Ego dataset.
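
As a concrete illustration of the first step (not the authors' implementation), below is a minimal sketch of grounding objects with an off-the-shelf object-centric vision-language model queried as a noisy oracle. CLIP stands in for that oracle here; the checkpoint name, prompt template, frame path, and candidate object list are all illustrative assumptions.

```python
# Sketch of Step 1: score candidate object labels for an egocentric
# video frame with a CLIP-style vision-language model used as a
# noisy oracle. All names below are illustrative, not from the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def ground_objects(frame: Image.Image, candidates: list[str]) -> dict[str, float]:
    """Return a (noisy) probability for each candidate object label."""
    inputs = processor(
        text=[f"a photo of a {c}" for c in candidates],  # assumed prompt template
        images=frame, return_tensors="pt", padding=True,
    )
    with torch.no_grad():
        logits = model(**inputs).logits_per_image.squeeze(0)  # (num_candidates,)
    return dict(zip(candidates, logits.softmax(dim=-1).tolist()))

frame = Image.open("frame_000123.jpg")  # hypothetical egocentric frame
scores = ground_objects(frame, ["knife", "cutting board", "tomato", "spoon"])
```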
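
The second step can be loosely illustrated as energy-based scoring of verb-object hypotheses. The sketch below is a deliberate simplification: ALGO's actual formulation is an energy-based pattern theory framework over generators and bonds, whereas here the energy is just a weighted sum of grounded-object evidence and knowledge-base verb-object affinities (e.g., as might be mined from ConceptNet). The affinity values, weights, and scores are made up for illustration.

```python
# Sketch of Step 2: rank (verb, object) activity hypotheses by a
# simple additive energy; lower energy = more plausible. This is an
# illustrative stand-in for ALGO's pattern-theory energy function.
from itertools import product

# Grounded-object probabilities, e.g. the output of the sketch above.
obj_scores = {"knife": 0.35, "cutting board": 0.20, "tomato": 0.40, "spoon": 0.05}

# Hypothetical commonsense verb-object affinities from a knowledge base.
affinity = {
    ("cut", "tomato"): 0.9, ("cut", "knife"): 0.8,
    ("stir", "spoon"): 0.85, ("wash", "cutting board"): 0.6,
}

def energy(verb: str, obj: str, w_vis: float = 1.0, w_kb: float = 1.0) -> float:
    """Combine visual evidence and knowledge-base affinity (assumed weights)."""
    return -(w_vis * obj_scores.get(obj, 0.0) + w_kb * affinity.get((verb, obj), 0.0))

verbs = ["cut", "stir", "wash"]
ranked = sorted(product(verbs, obj_scores), key=lambda vo: energy(*vo))
print(ranked[0])  # ('cut', 'tomato') under these toy values
```

In this toy setup, the lowest-energy pair is taken as the discovered activity; the paper's framework instead searches over knowledge-grounded configurations and also learns to ground the verb concept in the video.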