Deep End2End Voxel2Voxel Prediction

2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Abstract
Over the last few years, deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that, in order to yield successful results, they often require time-consuming architecture search, manual tweaking of parameters, and computationally intensive preprocessing or post-processing methods. In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end-to-end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the exact same architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing, and their outputs achieve outstanding performance without requiring post-processing. Thus, they offer an efficient alternative to traditional, much more computationally expensive methods in these video domains.
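To make the voxel-to-voxel idea concrete, below is a minimal PyTorch sketch of a 3D fully-convolutional network that maps a video clip to a prediction at every voxel. This is not the paper's exact V2V architecture; the layer widths, kernel sizes, and the number of output channels (`out_channels`) are illustrative assumptions.

```python
# Illustrative sketch of voxel-level prediction with 3D convolutions.
# NOT the paper's exact architecture; all layer sizes are assumptions.
import torch
import torch.nn as nn

class Voxel2VoxelNet(nn.Module):
    def __init__(self, in_channels: int = 3, out_channels: int = 12):
        super().__init__()
        # Encoder: 3D convolutions extract space-time features,
        # downsampling once along all three dimensions.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: a transposed 3D convolution restores full resolution,
        # so the network emits one output vector per input voxel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return self.decoder(self.encoder(x))

# Example: an 8-frame 112x112 RGB clip yields a dense prediction map
# with the same spatio-temporal extent as the input.
clip = torch.randn(1, 3, 8, 112, 112)
out = Voxel2VoxelNet()(clip)  # shape: (1, 12, 8, 112, 112)
```

Depending on the task, `out_channels` would be the number of semantic classes (segmentation), 2 or 3 flow components (optical flow), or color channels (video coloring), with a matching per-voxel loss.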
Keywords
deep End2End Voxel2Voxel prediction, deep learning methods, video analysis, video classification, video detection, single class label, output variables, time-consuming architecture search, intensive preprocessing, post-processing methods, video semantic segmentation, optical flow estimation, video coloring, raw video, video domains