Unsupervised Learning using Sequential Verification for Action Recognition

arXiv: Computer Vision and Pattern Recognition (2016)

Citations 59 | Views 44
Abstract
In this paper, we consider the problem of learning a visual representation from the raw spatiotemporal signals in videos for use in action recognition. Our representation is learned without supervision from semantic labels. We formulate it as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful unsupervised representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. Our method can also be combined with supervised representations to provide an additional boost in accuracy for action recognition. Finally, to quantify its sensitivity to human pose, we show results for human pose estimation on the FLIC dataset that are competitive with approaches using significantly more supervised training data.
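The sequential verification task described in the abstract — deciding whether a set of frames appears in the correct temporal order — can be sketched as a simple sampling procedure for generating training pairs. The function name, fixed `gap` parameter, and triplet scheme below are illustrative assumptions, not the paper's exact sampling strategy:

```python
import random

def make_order_verification_samples(num_frames, n_samples, gap=5, seed=0):
    """Sample (triplet, label) examples for temporal-order verification.

    Positive (label 1): three frame indices in correct temporal order.
    Negative (label 0): the same indices with the first two swapped,
    breaking the temporal order.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        a = rng.randrange(0, num_frames - 2 * gap)
        triplet = (a, a + gap, a + 2 * gap)      # frames in correct order
        if rng.random() < 0.5:
            samples.append((triplet, 1))         # positive example
        else:
            i, j, k = triplet
            samples.append(((j, i, k), 0))       # shuffled -> negative
    return samples
```

A CNN trained to predict the binary label from the corresponding frame crops would then learn the temporally varying structure (e.g., human pose) without any semantic labels.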