Align before Adapt: Leveraging Entity-to-Region Alignments for Generalizable Video Action Recognition
CVPR 2024
Abstract
Large-scale visual-language pre-trained models have achieved significant
success in various video tasks. However, most existing methods follow an "adapt
then align" paradigm, which adapts pre-trained image encoders to model
video-level representations and utilizes one-hot or text embeddings of the
action labels for supervision. This paradigm overlooks the challenge of mapping
from static images to complicated activity concepts. In this paper, we propose
a novel "Align before Adapt" (ALT) paradigm. Prior to adapting to video
representation learning, we exploit the entity-to-region alignments for each
frame. The alignments are fulfilled by matching the region-aware image
embeddings to an offline-constructed text corpus. With the aligned entities, we
feed their text embeddings to a transformer-based video adapter as queries,
which extract the semantics of the most important entities in a video into a
single vector. This paradigm reuses the visual-language alignment of the VLP
model during adaptation and explains an action through its underlying
entities, bridging the gap to complex activity semantics, particularly when
facing unfamiliar or unseen categories. ALT demonstrates competitive
performance while maintaining remarkably low computational cost. In fully
supervised experiments, it achieves 88.1% top-1 accuracy on Kinetics-400 with
only 4947 GFLOPs. Moreover, ALT outperforms the
previous state-of-the-art methods in both zero-shot and few-shot experiments,
emphasizing its superior generalizability across various learning scenarios.
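To make the paradigm concrete, below is a minimal PyTorch sketch of the two stages the abstract describes: matching region-aware frame embeddings to an offline-constructed entity corpus, then using the matched entities' text embeddings as queries in a transformer-based video adapter. All module names, shapes, and hyperparameters (e.g., "AlignBeforeAdapt", "num_entities") are hypothetical illustrations under CLIP-style encoder assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F
from torch import nn


class AlignBeforeAdapt(nn.Module):
    """Hypothetical sketch: align regions to an entity corpus, then adapt."""

    def __init__(self, dim: int = 512, num_entities: int = 8, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        # Transformer-based video adapter: entity text embeddings serve as
        # queries that cross-attend over all region features of the video.
        self.adapter = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.num_entities = num_entities

    def forward(self, region_feats: torch.Tensor, corpus: torch.Tensor) -> torch.Tensor:
        # region_feats: (B, T, R, D) region-aware embeddings, T frames x R regions
        # corpus:       (E, D) text embeddings of the offline entity corpus
        B, T, R, D = region_feats.shape
        regions = F.normalize(region_feats.reshape(B, T * R, D), dim=-1)
        entities = F.normalize(corpus, dim=-1)

        # Stage 1 ("align"): entity-to-region matching by cosine similarity;
        # keep the top-k entities best matched anywhere in the video.
        sim = regions @ entities.t()                     # (B, T*R, E)
        scores = sim.max(dim=1).values                   # (B, E)
        topk = scores.topk(self.num_entities, dim=-1).indices
        queries = entities[topk]                         # (B, k, D)

        # Stage 2 ("adapt"): the aligned entities query the video features;
        # pooling their outputs yields a single video-level vector.
        out = self.adapter(tgt=queries, memory=regions)  # (B, k, D)
        return out.mean(dim=1)                           # (B, D)


# Usage with random stand-in features (in practice these would come from a
# CLIP-style image encoder and text encoder):
model = AlignBeforeAdapt()
video = model(torch.randn(2, 8, 49, 512), torch.randn(1000, 512))
print(video.shape)  # torch.Size([2, 512])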