R^2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding
arXiv (2024)
Abstract
Video temporal grounding (VTG) is a fine-grained video understanding problem
that aims to ground relevant clips in untrimmed videos given natural language
queries. Most existing VTG models are built upon frame-wise final-layer CLIP
features, aided by additional temporal backbones (e.g., SlowFast) with
sophisticated temporal reasoning mechanisms. In this work, we claim that CLIP
itself already shows great potential for fine-grained spatial-temporal
modeling, as each layer offers distinct yet useful information under different
granularity levels. Motivated by this, we propose Reversed Recurrent Tuning
(R^2-Tuning), a parameter- and memory-efficient transfer learning framework
for video temporal grounding. Our method learns a lightweight R^2 Block
containing only 1.5% of the total parameters to perform progressive
spatial-temporal modeling. Starting from the last layer of CLIP, R^2 Block
recurrently aggregates spatial features from earlier layers, then refines
temporal correlation conditioning on the given query, resulting in a
coarse-to-fine scheme. R^2-Tuning achieves state-of-the-art performance
across three VTG tasks (i.e., moment retrieval, highlight detection, and video
summarization) on six public benchmarks (i.e., QVHighlights, Charades-STA,
Ego4D-NLQ, TACoS, YouTube Highlights, and TVSum) even without the additional
backbone, demonstrating the significance and effectiveness of the proposed
scheme. Our code is available at https://github.com/yeliudev/R2-Tuning.