Depth-Aware Sparse Transformer for Video-Language Learning

MM '23: Proceedings of the 31st ACM International Conference on Multimedia (2023)

Abstract
In Video-Language (VL) learning tasks, a massive amount of text annotations describe geometrical relationships between instances (e.g. 19.6% to 45.0% in MSVD, MSR-VTT, MSVD-QA and MSRVTT-QA), and these annotations often become the bottleneck of current VL tasks (e.g. 60.8% vs. 98.2% CIDEr on MSVD for geometrical and non-geometrical annotations, respectively). Considering the rich spatial information in depth maps, an intuitive approach is to enrich the conventional 2D visual representations with depth information through current SOTA models, e.g. the transformer. However, computing self-attention over long-range sequences and heterogeneous video-level representations is cumbersome in terms of computation cost and flexibility across frame scales. To tackle this, we propose a hierarchical transformer, termed Depth-Aware Sparse Transformer (DAST). Specifically, to guarantee computational efficiency, a depth-aware sparse attention module with linear computational complexity is designed for each transformer layer to learn depth-aware 2D representations. Furthermore, we design a hierarchical structure to maintain multi-scale temporal coherence across long-range dependencies. These qualities make DAST compatible with a broad range of video-language tasks, including video captioning (achieving 107.8% CIDEr on MSVD and 52.5% on MSR-VTT), video question answering (44.1% on MSVD-QA, 39.4% on MSRVTT-QA), and video-text matching (215.7 SumR on MSR-VTT). Our code is available at https://github.com/zchoi/DAST
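The abstract describes a depth-aware attention module with linear complexity but gives no implementation details; the sketch below illustrates one plausible realization, using a standard kernel-feature-map linear-attention formulation and an assumed additive fusion of depth tokens into the keys and values. The class name, the `depth_proj` layer, and the fusion scheme are illustrative assumptions, not the authors' DAST code (which is available at the repository linked above).

```python
import torch
import torch.nn as nn


class DepthAwareLinearAttention(nn.Module):
    """Sketch of depth-aware attention with linear complexity.

    Hypothetical fusion of RGB and depth tokens; not the official DAST module.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.depth_proj = nn.Linear(dim, dim)  # assumed projection of depth-map tokens
        self.out = nn.Linear(dim, dim)

    @staticmethod
    def _feature_map(x):
        # Positive kernel feature map (elu + 1), as in linear-attention formulations
        return torch.nn.functional.elu(x) + 1.0

    def forward(self, rgb_tokens, depth_tokens):
        # rgb_tokens, depth_tokens: (batch, seq_len, dim)
        b, n, _ = rgb_tokens.shape
        h, d = self.num_heads, self.head_dim

        # Assumed additive depth modulation of the 2D visual tokens
        fused = rgb_tokens + self.depth_proj(depth_tokens)

        q = self._feature_map(self.to_q(rgb_tokens)).view(b, n, h, d)
        k = self._feature_map(self.to_k(fused)).view(b, n, h, d)
        v = self.to_v(fused).view(b, n, h, d)

        # Linear attention: cost scales with sequence length n, not n^2
        kv = torch.einsum("bnhd,bnhe->bhde", k, v)            # (b, h, d, d)
        z = 1.0 / (torch.einsum("bnhd,bhd->bnh", q, k.sum(dim=1)) + 1e-6)
        out = torch.einsum("bnhd,bhde,bnh->bnhe", q, kv, z)   # (b, n, h, d)
        return self.out(out.reshape(b, n, h * d))


if __name__ == "__main__":
    attn = DepthAwareLinearAttention(dim=256, num_heads=8)
    rgb = torch.randn(2, 196, 256)    # e.g. patch tokens from one video frame
    depth = torch.randn(2, 196, 256)  # depth-map tokens for the same frame
    print(attn(rgb, depth).shape)     # torch.Size([2, 196, 256])
```

The key property illustrated is that the key-value summary `kv` is accumulated once over the sequence, so per-query cost is independent of sequence length, matching the linear-complexity claim; how DAST actually sparsifies or fuses depth is specified only in the paper and repository.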