DistInit: Learning Video Representations Without a Single Labeled Video

2019 IEEE/CVF International Conference on Computer Vision (ICCV)

Abstract
Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, the labeled video data required to train such models has not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work we propose an alternative approach to learning video representations that requires no semantically labeled videos, and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as “teachers” to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new avenues for learning spatiotemporal representations from unlabeled video data.
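To make the distillation framework concrete, below is a minimal PyTorch sketch of one training step in this style: a frozen, ImageNet-pretrained 2D teacher produces soft labels for each frame of an unlabeled clip, and a 3D spatiotemporal student is trained to match them. The names (`teacher2d`, `student3d`, `distillation_step`) and the frame-averaged logit aggregation are illustrative assumptions for this sketch, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher2d, student3d, clips, optimizer, T=4.0):
    """One distillation step on unlabeled video.

    clips: (B, C, T_frames, H, W) tensor of raw video clips.
    teacher2d: frozen image-pretrained 2D network (e.g. a ResNet).
    student3d: trainable spatiotemporal network taking 5D clip input.
    T: softmax temperature for the distillation loss.
    """
    B, C, Tf, H, W = clips.shape

    with torch.no_grad():
        # Run the 2D teacher on every frame independently, then average
        # its logits over time to get one soft target per clip.
        frames = clips.permute(0, 2, 1, 3, 4).reshape(B * Tf, C, H, W)
        teacher_logits = teacher2d(frames).reshape(B, Tf, -1).mean(dim=1)
        soft_targets = F.softmax(teacher_logits / T, dim=1)

    # Student sees the whole clip and must match the teacher's soft labels.
    student_logits = student3d(clips)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)  # standard temperature scaling of the distillation gradient

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that no semantic video label appears anywhere in this loop; the only supervision is the still-image teacher's prediction, which is what allows training on completely uncurated video sources.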
Keywords
video representation learning,video recognition models,shallow classifiers,hand-crafted features,deep spatiotemporal networks,labeled video data,semantically labeled videos,clean still-image datasets,spatiotemporal features,still-image networks,video architectures,image-based models,spatiotemporal representations,unlabeled video data,DistInit,uncurated raw video data sources,2D teacher models