Language Model Guided Interpretable Video Action Reasoning
CVPR 2024 (2024)
Abstract
While neural networks have excelled in video action recognition tasks, their
black-box nature often obscures the understanding of their decision-making
processes. Recent approaches have used inherently interpretable models to analyze
video actions in a manner akin to human reasoning. These models, however,
usually fall short in performance compared to their black-box counterparts. In
this work, we present a new framework named Language-guided Interpretable
Action Recognition (LaIAR). LaIAR leverages knowledge from language
models to enhance both the recognition capabilities and the interpretability of
video models. In essence, we redefine the problem of understanding video model
decisions as a task of aligning video and language models. Using the logical
reasoning captured by the language model, we steer the training of the video
model. This integrated approach not only improves the video model's
adaptability to different domains but also boosts its overall performance.
Extensive experiments on two complex video action datasets, Charades and CAD-120,
validate the improved performance and interpretability of our LaIAR framework.
The code of LaIAR is available at https://github.com/NingWang2049/LaIAR.
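The abstract does not spell out how the language model's reasoning steers the video model's training; one plausible reading is a distillation-style alignment objective between the two branches. Below is a minimal sketch of such an objective, assuming (hypothetically) that both branches emit class logits and the language branch acts as a frozen teacher; all names, the temperature, and the loss weighting are illustrative and not the authors' method.

```python
import torch
import torch.nn.functional as F

def alignment_loss(video_logits: torch.Tensor,
                   language_logits: torch.Tensor,
                   labels: torch.Tensor,
                   temperature: float = 2.0,
                   alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical objective: supervise the video branch with ground-truth
    labels while aligning its predictions to the language branch's output."""
    # Standard supervised cross-entropy on the video branch.
    ce = F.cross_entropy(video_logits, labels)
    # Soft alignment: match the video model's distribution to the language
    # model's softened distribution (knowledge-distillation-style KL term).
    kl = F.kl_div(
        F.log_softmax(video_logits / temperature, dim=-1),
        F.softmax(language_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kl
```

In this sketch, gradients flow only through the video branch (the teacher logits are detached), which matches the stated goal of transferring language-model knowledge into the video model rather than jointly retraining both.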