Video Stitching With Spatial-Temporal Content-Preserving Warping

2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)(2015)

Citations: 137 | Views: 79
Abstract
We propose a novel algorithm for stitching multiple synchronized video streams into a single panoramic video with spatial-temporal content-preserving warping. Compared to image stitching, video stitching faces several new challenges, including temporal coherence, dominant foreground objects moving across views, and camera jittering. To overcome these issues, the proposed algorithm draws upon ideas from recent local warping methods in image stitching and video stabilization. For video frame alignment, we propose spatial-temporal local warping, which locally aligns frames from different videos while maintaining temporal consistency. For aligned video frame composition, we find stitching seams with a 3D graph cut on overlapped spatial-temporal volumes, where the 3D graph is weighted with object and motion saliency to reduce stitching artifacts. Experimental results show the advantages of the proposed algorithm over several state-of-the-art alternatives, especially in challenging conditions.
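As a rough illustration of the composition step described above, the sketch below finds a temporally coherent stitching seam with a 3D graph cut over the overlapped spatial-temporal volume, penalizing cuts through salient moving content. The specific energy terms, the left/right boundary convention, and the use of the PyMaxflow package are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch: 3D graph-cut seam finding on an overlapped spatial-temporal volume,
# with cut costs increased by object/motion saliency. Assumes PyMaxflow is installed
# (pip install PyMaxflow); the energy terms here are illustrative assumptions only.
import numpy as np
import maxflow

def find_stitching_seam(overlap_a, overlap_b, saliency, lam=10.0):
    """overlap_a, overlap_b: (T, H, W) float grayscale overlap volumes from the two
    aligned videos; saliency: (T, H, W) object/motion saliency in [0, 1].
    Returns a boolean (T, H, W) mask: True where the output takes video B."""
    diff = np.abs(overlap_a - overlap_b)        # per-voxel disagreement between views
    smooth = diff * (1.0 + lam * saliency)      # make cutting through salient motion costly

    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(overlap_a.shape)              # one node per voxel
    # Default von Neumann neighborhood connects spatial AND temporal neighbors,
    # which is what keeps the seam coherent over time.
    g.add_grid_edges(nodeids, weights=smooth, symmetric=True)

    # Hard data terms: left border of the overlap must come from A, right border
    # from B (an assumed boundary convention for a left/right camera pair).
    inf = 1e9
    src = np.zeros(overlap_a.shape)
    snk = np.zeros(overlap_a.shape)
    src[:, :, 0] = inf                           # leftmost column -> source (video A)
    snk[:, :, -1] = inf                          # rightmost column -> sink (video B)
    g.add_grid_tedges(nodeids, src, snk)

    g.maxflow()
    return g.get_grid_segments(nodeids)          # True = sink side = take video B
```

Solving the cut on the whole (T, H, W) volume at once, rather than frame by frame, is what discourages the seam from flickering between consecutive frames.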
Keywords
spatial-temporal content-preserving warping, video stream, single panoramic video, image stitching, video stitching, camera jittering, local warping method, video stabilization, video frame alignment, video frame composition, overlapped spatial-temporal volume