Disentangle Propagation and Restoration for Efficient Video Recovery

MM '23: Proceedings of the 31st ACM International Conference on Multimedia (2023)

Abstract
We propose the first framework for accelerating video recovery, which aims to efficiently recover high-quality videos from degraded inputs affected by various deteriorative factors. Although current video recovery methods achieve excellent performance, their significant computational overhead limits their widespread application. To address this, we present a pioneering study on explicitly disentangling temporally and spatially redundant computation by decomposing each input frame into propagation and restoration regions, thereby achieving a significant computational reduction. Specifically, we leverage contrastive learning to learn degradation-invariant features, which overcome the disturbance of deteriorative factors and enable accurate disentanglement. For the propagation region, we introduce a split-fusion block that handles inter-frame variations, efficiently generating high-quality output at low cost and significantly reducing temporally redundant computation. For the restoration region, we propose an efficient adaptive halting mechanism that requires few extra parameters and can adaptively halt patch processing, considerably reducing spatially redundant computation. Furthermore, we design a patch-adaptive prior regularization to boost both efficiency and performance. Our method achieves outstanding results on various video recovery tasks, such as video denoising, video deraining, video dehazing, and video super-resolution, with a 50% to 60% reduction in GMACs over state-of-the-art video recovery methods while maintaining comparable performance.
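To illustrate the adaptive halting idea described in the abstract, the sketch below shows a generic ACT-style early-exit loop over image patches: each patch accumulates a halting score per refinement step and stops once the score crosses a threshold, so easy (already-clean) patches consume fewer steps than heavily degraded ones. All function names, the toy "refinement" step, and the smoothness-based halting score are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def adaptive_halting(patches, refine_fn, halting_fn, max_steps=4, threshold=0.8):
    """Refine each patch iteratively; stop early once the cumulative
    halting score reaches `threshold` (ACT-style early exit).
    Returns the refined patches and the number of steps each one used."""
    outputs, steps_used = [], []
    for p in patches:
        cum, steps = 0.0, 0
        for _ in range(max_steps):
            p = refine_fn(p)           # one (toy) restoration step
            cum += halting_fn(p)       # accumulate halting probability
            steps += 1
            if cum >= threshold:       # easy patches exit early
                break
        outputs.append(p)
        steps_used.append(steps)
    return outputs, steps_used

# Toy demo: a lightly-noised patch vs. a heavily-noised patch.
rng = np.random.default_rng(0)
patches = [rng.normal(scale=0.05, size=(8, 8)),   # "easy" patch
           rng.normal(scale=0.5, size=(8, 8))]    # "hard" patch

refine = lambda p: 0.9 * p                            # toy denoising: shrink noise
halt = lambda p: 1.0 - min(2.0 * float(p.std()), 1.0)  # smoother patch -> higher score

outs, steps = adaptive_halting(patches, refine, halt)
# The easy patch halts in fewer steps than the hard one,
# which is where the spatial computation savings come from.
```

In the paper's setting the halting score would be predicted by a small learned head rather than a hand-crafted smoothness statistic, but the control flow (per-patch accumulate-and-exit) is the same.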