Attend and Interact: Higher-Order Object Interactions for Video Understanding

arXiv (Cornell University), 2018

Cited 166 | Views 181
Abstract
Human actions often involve complex interactions across several inter-related objects in a scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single-object representations or pairwise object relationships. Furthermore, learning interactions across multiple objects over hundreds of video frames is computationally infeasible, and performance may suffer because a large combinatorial space has to be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while saving more than three times the computation of traditional pairwise relationships. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performance on both datasets even though the videos are sampled at a maximum of 1 FPS. To the best of our knowledge, this is the first work to model object interactions on open-domain, large-scale video datasets; moreover, modeling higher-order object interactions improves performance at low computational cost.
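To make the core idea concrete, the following is a minimal sketch of how attentive selection of object subgroups can replace exhaustive pairwise modeling. It is not the authors' SINet implementation: the module name HigherOrderInteraction, the parameter num_groups, the per-frame scope, and the simple softmax attention are all illustrative assumptions, written in PyTorch.

# Minimal sketch (not the authors' released code): K learnable attention
# heads softly select subgroups of objects; interactions are then modeled
# among the K attended group features instead of among all object pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HigherOrderInteraction(nn.Module):  # hypothetical name
    def __init__(self, obj_dim: int, hidden_dim: int, num_groups: int = 3):
        super().__init__()
        # One attention scorer per group: a weight for each detected object.
        self.attn = nn.ModuleList(
            [nn.Linear(obj_dim, 1) for _ in range(num_groups)]
        )
        # MLP fusing the K group features into one interaction feature.
        self.fuse = nn.Sequential(
            nn.Linear(num_groups * obj_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, num_objects, obj_dim) per-frame object features
        groups = []
        for attn_k in self.attn:
            scores = attn_k(objects)                       # (B, N, 1)
            weights = F.softmax(scores, dim=1)             # soft subgroup
            groups.append((weights * objects).sum(dim=1))  # (B, obj_dim)
        # Interaction among K attended groups: O(K), not O(N^2) pairs.
        return self.fuse(torch.cat(groups, dim=-1))        # (B, hidden_dim)

# Usage: e.g. 30 detected objects per frame with 2048-d ROI features.
feats = torch.randn(4, 30, 2048)
module = HigherOrderInteraction(obj_dim=2048, hidden_dim=512)
out = module(feats)  # (4, 512) interaction feature for downstream heads

Under these assumptions, cost grows linearly with the number of attention groups K rather than quadratically with the number of detected objects, which is the source of the computational savings the abstract describes.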
Keywords
fine-grained video, action recognition, video captioning, Kinetics dataset, ActivityNet Captions dataset, SINet-Caption, higher-order interactions, multiple objects, single object representation, traditional pairwise relationships, open domain large-scale video datasets, higher-order object interactions, complex interactions, inter-related objects, visual relationship detection