Asynchronous Temporal Fields for Action Recognition

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited 200 | Viewed 184
Abstract
Actions are more than just movements and trajectories: we cook to eat and we hold a cup to drink from it. A thorough understanding of videos requires going beyond appearance modeling and necessitates reasoning about the sequence of activities, as well as higher-level constructs such as intentions. But how do we model and reason about these? We propose a fully-connected temporal CRF model for reasoning over various aspects of activities that includes objects, actions, and intentions, where the potentials are predicted by a deep network. End-to-end training of such structured models is a challenging endeavor: for inference and learning we need to construct mini-batches consisting of whole videos, leading to mini-batches with only a few videos. This causes high correlation between data points and leads to a breakdown of the backpropagation algorithm. To address this challenge, we present an asynchronous variational inference method that allows efficient end-to-end training. Our method achieves a classification mAP of 22.4% on the Charades benchmark, outperforming the state of the art (17.2% mAP), and offers equal gains on the task of temporal localization.
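
To make the abstract's idea more concrete, here is a minimal, illustrative sketch of mean-field inference over a fully-connected temporal CRF, loosely mirroring the asynchronous updates described above. It is not the authors' implementation: the unary scores and the pairwise compatibility matrix merely stand in for outputs of the deep network, and all names (unary, pairwise, softmax) are placeholders chosen for this example.

```python
# Minimal sketch (not the paper's code): mean-field inference for a
# fully-connected temporal CRF whose per-frame unary potentials would
# come from a deep network.  The asynchronous flavour is imitated by
# caching per-frame marginals and refreshing one randomly chosen frame
# at a time against the (possibly stale) marginals of the others.
import numpy as np

rng = np.random.default_rng(0)
T, C = 30, 5                              # video frames, action classes

# Stand-ins for network outputs: per-frame unary scores and a learned
# class-to-class temporal compatibility matrix psi(a, a').
unary = rng.normal(size=(T, C))
pairwise = 0.1 * rng.normal(size=(C, C))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Cached marginals Q_t(a), initialised uniformly.
Q = np.full((T, C), 1.0 / C)

for _ in range(10):                       # sweeps over the video
    for t in rng.permutation(T):          # asynchronous visiting order
        # Message into frame t from all other frames t' != t:
        # sum_{t'!=t} sum_{a'} Q_{t'}(a') * psi(a, a')
        msg = pairwise @ (Q.sum(axis=0) - Q[t])
        Q[t] = softmax(unary[t] + msg)

print("per-frame action predictions:", Q.argmax(axis=1))
```

In the paper, asynchronous variational inference is what enables efficient end-to-end training without whole-video mini-batches; this toy example only illustrates the update rule itself.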
Keywords
asynchronous temporal fields,action recognition,higher-level constructs,deep network,structured models,asynchronous variational inference method,efficient end-to-end training,temporal localization,fully-connected temporal CRF model,backprop algorithm,classification mAP,Charades benchmark