IVTEN: Integration of Visual-Textual Entities for Temporal Activity Localization via Language in Video

INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY (IWAIT) 2022

Abstract
This study addresses the core challenge of temporal activity localization via language (TALL) in untrimmed video. The task is difficult because grounding the target temporal activity requires a thorough understanding of both the video and the language query so that the two can be matched reliably. Previous approaches based on sliding windows, regression, and ranking handled the query without thoroughly analyzing the visual and textual content, which degraded performance. Our proposed architecture, the Integration of Visual-Textual Entities Network (IVTEN), consists of three sub-modules: (1) a visual encoder, (2) a textual encoder, and (3) cross-modal attention fusion (CMAF). The visual encoder extracts visual features and projects them into a shared embedding space; the textual encoder extracts word features and embeds them into the same space; CMAF then fuses the two modalities (activity and query). On three standard benchmark datasets, Charades-STA, TACoS, and ActivityNet-Captions, IVTEN outperforms the state of the art.
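The abstract does not include an implementation. As a rough illustration of the kind of cross-modal attention fusion it describes, the following PyTorch sketch projects clip and word features into a shared embedding space and lets each clip attend to the query words; all module names, dimensions, and the specific attention formulation are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a CMAF-style cross-modal attention fusion.
# Dimensions, module names, and the attention formulation are assumed,
# not taken from the paper.
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Fuses video clip features with query word features via attention."""

    def __init__(self, visual_dim=1024, text_dim=300, embed_dim=256, num_heads=4):
        super().__init__()
        # Project both modalities into a shared (collective) embedding space.
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Video clips act as attention queries; words act as keys/values.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, clip_feats, word_feats):
        # clip_feats: (batch, num_clips, visual_dim)
        # word_feats: (batch, num_words, text_dim)
        v = self.visual_proj(clip_feats)
        t = self.text_proj(word_feats)
        # Each clip attends to the query words most relevant to it.
        attended, _ = self.cross_attn(query=v, key=t, value=t)
        # Residual connection preserves the original visual signal.
        return self.norm(v + attended)


if __name__ == "__main__":
    fusion = CrossModalAttentionFusion()
    clips = torch.randn(2, 64, 1024)  # e.g., pre-extracted clip features
    words = torch.randn(2, 12, 300)   # e.g., word embeddings for the query
    fused = fusion(clips, words)
    print(fused.shape)  # torch.Size([2, 64, 256])
```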
Keywords
Moment localization in video, Activity localization, Moment localization, Cross-modal interactions, Single moment retrieval.