Scene-Adaptive Temporal Stabilisation for Video Colourisation Using Deep Video Priors

ECCV Workshops (4) (2022)

Abstract
Automatic image colourisation methods applied independently to each video frame usually lead to flickering artefacts or propagation of errors because of differences between neighbouring frames. While this can be partially solved using optical flow methods, complex scenarios such as the appearance of new objects in the scene limit the efficiency of such solutions. To address this issue, we propose applying blind temporal consistency, learned at inference time, to consistently adapt colourisation to the given frames. However, training at test time is extremely time-consuming and its performance is highly dependent on the content, motion, and length of the input video, requiring a large number of iterations to generalise to complex sequences with multiple shots and scene changes. This paper proposes a generalised framework for the colourisation of complex videos with an optimised few-shot training strategy to learn scene-aware video priors. The proposed architecture is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Experimental results show performance improvement in complex sequences while requiring less training data and significantly fewer iterations.
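The core idea — cluster frames into scenes, then enforce temporal consistency within each scene — can be illustrated with a toy NumPy sketch. This is not the paper's method (which trains a deep video prior network at test time); it is a hypothetical simplification: frames are clustered by mean colour with a tiny k-means, and each frame's colourisation is pulled towards its scene's average to suppress flicker. All names and the blend weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "colourised video": two scenes with different base colours, plus
# independent per-frame noise simulating frame-wise colourisation flicker.
scene_a = np.full((10, 8, 8, 3), 0.2)
scene_b = np.full((10, 8, 8, 3), 0.7)
frames = np.concatenate([scene_a, scene_b])
frames += rng.normal(0, 0.05, frames.shape)  # flicker

# Step 1 (scene awareness): cluster frames by mean colour (tiny 1-D k-means).
feats = frames.reshape(len(frames), -1).mean(axis=1, keepdims=True)
centres = feats[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(np.abs(feats - centres.T), axis=1)
    for k in range(2):
        centres[k] = feats[labels == k].mean()

# Step 2 (stabilisation): blend each frame towards its scene's mean
# colourisation, reducing frame-to-frame variance within the scene.
stabilised = frames.copy()
for k in range(2):
    scene_mean = frames[labels == k].mean(axis=0)
    stabilised[labels == k] = 0.3 * frames[labels == k] + 0.7 * scene_mean

def flicker(video, labels, k):
    """Temporal flicker proxy: per-pixel variance across a scene's frames."""
    return video[labels == k].var(axis=0).mean()

for k in range(2):
    print(flicker(frames, labels, k) > flicker(stabilised, labels, k))
```

In the actual framework, step 2 is performed by a network optimised at test time (a deep video prior) rather than a fixed blend, and the clustering head is trained jointly so that the prior specialises per scene instead of averaging across shot boundaries.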
Keywords
video colourisation, deep video priors, scene-adaptive