Video Demoiréing with Deep Temporal Color Embedding and Video-Image Invertible Consistency

IEEE Transactions on Multimedia (2024)

Abstract
Demoiréing is the task of removing moiré patterns, which are commonly caused by interference between a screen and a digital camera. Although research on single-image demoiréing has made great progress, video demoiréing has received less attention from the community. Video demoiréing poses a new set of challenges. First, most existing video restoration algorithms rely on multi-resolution pixel-based alignment, which can damage the details of the predicted results. Second, these algorithms are based on flow-based or relation-based losses, making it difficult to handle large motions between adjacent frames while keeping temporal consistency intact. To address these challenges, we present a novel deep learning-based approach, the Deep Temporal Color Embedding network (DTCENet), which employs an invertible network to align distorted color patches in a patch-based embedding framework. DTCENet preserves details well while eliminating color distortions. Furthermore, we introduce a video-image invertible loss function to effectively handle the color inconsistency problem between adjacent frames. Our approach shows promising results in demoiréing videos, with improved performance over existing state-of-the-art algorithms: it achieves about 10% improvement in LPIPS and 10.3% improvement in FID compared with recent state-of-the-art methods.
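The abstract's key architectural idea is an invertible network: a mapping whose inverse can be computed exactly, so no detail is lost when color patches are transformed and restored. A minimal sketch of the standard building block of such networks, an additive coupling layer, is shown below. The split scheme, the transform `t()`, and all names are illustrative assumptions, not the paper's actual DTCENet architecture (which uses learned subnetworks on image patches).

```python
import math

def t(xs):
    # Stand-in nonlinear transform; in a real invertible network this
    # would be a learned subnetwork (it need not itself be invertible).
    return [math.tanh(v) for v in xs]

def coupling_forward(x):
    # Additive coupling: split the vector in half and shift the second
    # half by a transform of the first. The first half passes through
    # unchanged, which is what makes exact inversion possible.
    h = len(x) // 2
    x1, x2 = x[:h], x[h:]
    y2 = [a + b for a, b in zip(x2, t(x1))]
    return x1 + y2

def coupling_inverse(y):
    # Exact inverse: subtract the same transform of the unchanged half,
    # recovering the input with no information loss.
    h = len(y) // 2
    y1, y2 = y[:h], y[h:]
    x2 = [a - b for a, b in zip(y2, t(y1))]
    return y1 + x2
```

Because the layer is exactly invertible, stacking such blocks yields a network that can align (and later undo the alignment of) color patches without the detail loss that lossy pixel-based alignment can cause.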
Keywords
Video demoiréing, temporal consistency, color distortion