Common and Innovative Visuals: A Sparsity Modeling Framework for Video

IEEE Transactions on Image Processing (2014)

Cited by 25 | Views 27
Abstract
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (1) a common frame, which describes the visual information common to all the frames in the scene/segment, and (2) a set of innovative frames, which depict the dynamic behavior of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as common and innovative visuals (CIV). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting), and scene change detection are presented to demonstrate the efficiency and performance of the proposed model.
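The abstract's core idea — splitting a scene's frames into one shared common frame plus sparse, frame-specific innovations — can be illustrated with a deliberately simplified sketch. The paper solves this jointly via compressed-sensing techniques; the toy version below instead uses the elementwise median as the common frame, since the median minimizes the summed absolute innovations per pixel (a convex proxy for sparsity). The function name `decompose_civ` and the median shortcut are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def decompose_civ(frames):
    """Toy common/innovative split (hypothetical simplification of CIV).

    The elementwise median across frames minimizes sum_t |e_t| at each
    pixel, a convex surrogate for finding the sparsest innovations; the
    paper's actual method jointly estimates both via compressed sensing.
    """
    stack = np.stack(frames).astype(float)
    common = np.median(stack, axis=0)       # shared/static visual content
    innovations = stack - common            # sparse per-frame dynamics
    return common, innovations

# Toy example: a static dark background with one moving bright pixel.
bg = np.zeros((4, 4))
frames = []
for t in range(5):
    f = bg.copy()
    f[0, t % 4] = 1.0                       # the "innovation" in frame t
    frames.append(f)

common, innov = decompose_civ(frames)
# Each pixel is bright in at most 2 of 5 frames, so the median
# recovers the background exactly; innovations capture the motion.
assert np.allclose(common, bg)
assert np.count_nonzero(innov) == 5         # one moving pixel per frame
```

In this contrived case the decomposition is exact; with noise or a less dominant background, the joint sparse estimation described in the paper is what makes the recovery robust.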
Keywords
compressed sensing, image representation, object tracking, spatiotemporal phenomena, video coding, CIV, common and innovative visuals, scene change detection, sparsity modeling framework, spatiotemporal information, video analysis, video editing, video frames, video processing, video representation models, video segments, video models, common and innovative parts, sparse coding