Video Saliency Prediction With Optimized Optical Flow And Gravity Center Bias

2016 IEEE International Conference on Multimedia & Expo (ICME), 2016

Cited by 33 | Views 24
Abstract
Dynamic videos are viewed fundamentally differently from static images: besides spatial features, motion features also play an important role as a temporal factor. Most existing video saliency models employ optical flow to represent motion. However, optical flow often suffers from discontinuity. We also observe that human fixations in a single video frame are much sparser than those in an identical still picture, yet many spatial saliency models treat each video frame independently as a static image. In this paper, we predict dynamic visual saliency by fusing spatial and temporal features. To construct the temporal relationships among a set of successive frames, we introduce a smoothness operator on the optical flow field to obtain more accurate motion features. Then, to exploit the sparse property of video saliency, we adapt the weights of the regions surrounding the saliency gravity center in the final maps. Experiments show that our model is more consistent with human eye-tracking benchmarks than state-of-the-art models.
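The gravity-center bias described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: it computes the center of mass of a saliency map and re-weights the map with a Gaussian centered there, so regions near the gravity center are emphasized. The `sigma_ratio` parameter and the Gaussian weighting form are assumptions for illustration.

```python
import numpy as np

def gravity_center_bias(sal_map, sigma_ratio=0.25):
    """Re-weight a saliency map around its gravity center.

    Hypothetical sketch of a gravity-center bias: the center of
    mass of the map is computed, and a Gaussian weight centered
    there emphasizes nearby regions. `sigma_ratio` (fraction of
    the larger image dimension) is an assumed parameter.
    """
    h, w = sal_map.shape
    total = sal_map.sum()
    if total == 0:
        return sal_map  # empty map: nothing to re-weight
    ys, xs = np.mgrid[0:h, 0:w]
    # Gravity center = saliency-weighted mean of pixel coordinates
    cy = (ys * sal_map).sum() / total
    cx = (xs * sal_map).sum() / total
    sigma = sigma_ratio * max(h, w)
    weight = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    out = sal_map * weight
    # Renormalize so the peak saliency is 1 again
    peak = out.max()
    return out / peak if peak > 0 else out
```

For example, given a map with a strong blob and a weaker distant blob, the weaker blob (farther from the gravity center) is suppressed relative to the stronger one.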
Keywords
video saliency,optical flow,temporal superpixel,gravity center