Exemplar-based video colorization with long-term spatiotemporal dependency

Knowledge-Based Systems (2024)

Abstract
Exemplar-based video colorization is an essential technique for applications such as old movie restoration. Although recent methods perform well in still scenes or scenes with regular movement, they often lack robustness in moving scenes because of their weak ability to model long-term dependency both spatially and temporally, which leads to color fading, color discontinuity, and other artifacts. To solve this problem, we propose an exemplar-based video colorization framework with long-term spatiotemporal dependency. To enhance the long-term spatial dependency, we design a parallelized CNN-Transformer block and a double-head non-local operation. The proposed CNN-Transformer block better incorporates the long-term spatial dependency with local texture and structural features, and the double-head non-local operation further exploits the augmented features. To enhance the long-term temporal dependency, we introduce a novel Linkage subnet, which propagates motion information across adjacent frame blocks and helps maintain temporal continuity. Experiments demonstrate that our model outperforms recent state-of-the-art methods both quantitatively and qualitatively. Moreover, our model generates more colorful, realistic, and stable results, especially for scenes in which objects change greatly and irregularly.
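The non-local operation mentioned above can be illustrated with a minimal sketch: attention-style matching that warps exemplar features toward target-frame features, with the channel dimension split across two heads. This is a generic illustration under assumed shapes (`N x C` target features, `M x C` exemplar features as nested lists); the function name `non_local_match` and all details are hypothetical, not the paper's actual implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def non_local_match(target, exemplar, num_heads=2):
    """Attention-style non-local matching (illustrative sketch).

    target:   N x C feature rows (list of lists) from the frame to colorize.
    exemplar: M x C feature rows from the reference (exemplar) image.
    Channels are split into `num_heads` heads; each head computes a
    soft correspondence and warps exemplar features to the target.
    """
    C = len(target[0])
    d = C // num_heads
    out = []
    for t in target:
        row = []
        for h in range(num_heads):
            q = t[h * d:(h + 1) * d]                         # query slice
            ks = [e[h * d:(h + 1) * d] for e in exemplar]    # key/value slices
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in ks]
            w = softmax(scores)                              # soft correspondence
            # Weighted sum of exemplar features = warped feature for this head.
            row += [sum(wi * k[j] for wi, k in zip(w, ks)) for j in range(d)]
        out.append(row)
    return out
```

In a real model the queries, keys, and values would come from learned projections of deep CNN features, and the warped features would feed the decoder that predicts the chrominance channels.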
Keywords
Video colorization, Exemplar-based, Moving scenes, Long-term dependency, Spatiotemporal