Cube Padding for Weakly-Supervised Saliency Prediction in 360° Videos

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)

Abstract
Automatic saliency prediction in 360° videos is critical for viewpoint guidance applications (e.g., Facebook 360 Guide). We propose a spatial-temporal network which is (1) trained in a weakly-supervised manner and (2) tailor-made for the 360° viewing sphere. Note that most existing methods are less scalable since they rely on annotated saliency maps for training. Most importantly, they convert the 360° sphere to 2D images (e.g., a single equirectangular image or multiple separate Normal Field-of-View (NFoV) images), which introduces distortion and image boundaries. In contrast, we propose a simple and effective Cube Padding (CP) technique as follows. Firstly, we render the 360° view on the six faces of a cube using perspective projection, which introduces very little distortion. Then, we concatenate all six faces and utilize the connectivity between faces on the cube for image padding (i.e., Cube Padding) in convolution, pooling, and convolutional LSTM layers. In this way, CP introduces no image boundary while being applicable to almost all Convolutional Neural Network (CNN) architectures. To evaluate our method, we propose Wild-360, a new 360° video saliency dataset containing challenging videos with saliency heatmap annotations. In experiments, our method outperforms baseline methods in both speed and quality.
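To illustrate the padding idea described in the abstract, the sketch below shows cube padding for the four equatorial cube faces in PyTorch. It is a minimal, hypothetical sketch and not the authors' implementation: it assumes faces are stacked along a dedicated dimension in the (assumed) order [front, right, back, left, top, bottom], copies boundary columns directly between horizontally adjacent faces, and for brevity falls back to zero padding on the top/bottom edges, where the true cube connectivity would require 90° rotations. The function name `cube_pad_sides` and the face ordering are illustrative assumptions.

```python
# Minimal sketch of cube padding (side faces only), under an assumed face ordering
# [front, right, back, left, top, bottom]. Not the paper's reference implementation.
import torch
import torch.nn.functional as F


def cube_pad_sides(faces: torch.Tensor, pad: int) -> torch.Tensor:
    """faces: (B, 6, C, H, W) cube-face tensor; returns (B, 6, C, H + 2*pad, W + 2*pad)."""
    B, num_faces, C, H, W = faces.shape
    assert num_faces == 6
    # Equatorial ring of faces: front -> right -> back -> left -> front ...
    ring = [0, 1, 2, 3]
    # Start from zero padding everywhere (also covers the omitted top/bottom edges).
    padded = F.pad(faces, (pad, pad, pad, pad))
    for i, f in enumerate(ring):
        left_nb = ring[(i - 1) % 4]   # face adjacent on the left of face f
        right_nb = ring[(i + 1) % 4]  # face adjacent on the right of face f
        # Left pad strip <- rightmost columns of the left neighbor.
        padded[:, f, :, pad:pad + H, :pad] = faces[:, left_nb, :, :, W - pad:]
        # Right pad strip <- leftmost columns of the right neighbor.
        padded[:, f, :, pad:pad + H, W + pad:] = faces[:, right_nb, :, :, :pad]
    return padded


if __name__ == "__main__":
    x = torch.randn(2, 6, 16, 64, 64)   # batch of 2, six 64x64 feature-map faces
    y = cube_pad_sides(x, pad=1)        # pad by 1 before a 3x3 convolution
    print(y.shape)                      # torch.Size([2, 6, 16, 66, 66])
```

Because the pad pixels come from the neighboring cube faces rather than zeros, a convolution applied after such padding sees no artificial image boundary between faces, which is the effect the abstract attributes to CP.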
Keywords
annotated saliency map,image boundary,CP,image padding,convolutional LSTM layers,360° video saliency dataset,saliency heatmap annotations,baseline methods,weakly-supervised saliency prediction,360° videos,automatic saliency prediction,viewpoint guidance applications,Facebook 360 Guide,spatial-temporal network,convolutional neural network structures,cube padding,perspective projection,Wild-360