Video Demoiréing With Deep Temporal Color Embedding and Video-Image Invertible Consistency.

IEEE Transactions on Multimedia (2024)

Abstract
Demoiréing is the task of removing moiré patterns, which are commonly caused by interference between a screen and a digital camera. Although single-image demoiréing has made great progress, video demoiréing has received less attention from the community and poses a new set of challenges. First, most existing video restoration algorithms rely on multi-resolution pixel-based alignment, which can damage the details of the predicted results. Second, these algorithms are based on flow-based or relation-based losses, making it difficult to handle large motions between adjacent frames while keeping temporal consistency intact. To address these challenges, we present a novel deep learning approach, the Deep Temporal Color Embedding network (DTCENet), which employs an invertible network to align color-distorted patches in a patch-based embedding framework. DTCENet preserves details well while eliminating color distortions. Furthermore, we introduce a video-image invertible loss function to effectively handle color inconsistency between adjacent frames. Our approach shows promising results on video demoiréing, improving on recent state-of-the-art methods by about 10% in LPIPS and 10.3% in FID.
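The abstract does not detail DTCENet's invertible network, but the core property it relies on, a transform that can be run exactly in both directions, is typically obtained with coupling layers in the style of NICE/RealNVP. The sketch below is only an illustration of that general idea, not the paper's architecture; the function names, the additive coupling form, and the `tanh` conditioner are all assumptions.

```python
import numpy as np

def coupling_forward(x, weights, bias):
    """Additive coupling layer: split features in two halves; the first
    half passes through unchanged and conditions a shift applied to the
    second half. Exactly invertible by construction (illustrative only)."""
    x1, x2 = np.split(x, 2, axis=-1)
    shift = np.tanh(x1 @ weights + bias)  # any function of x1 keeps invertibility
    return np.concatenate([x1, x2 + shift], axis=-1)

def coupling_inverse(y, weights, bias):
    """Exact inverse: recompute the shift from the unchanged half
    and subtract it from the transformed half."""
    y1, y2 = np.split(y, 2, axis=-1)
    shift = np.tanh(y1 @ weights + bias)
    return np.concatenate([y1, y2 - shift], axis=-1)
```

Because the inverse is exact rather than learned, such layers lose no information, which is the motivation usually given for using invertible networks in detail-preserving restoration.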
Keywords
Video demoiréing, temporal consistency, color distortion