From Image to Video Face Inpainting: Spatial-Temporal Nested GAN (STN-GAN) for Usability Recovery

2020 IEEE Winter Conference on Applications of Computer Vision (WACV)

Cited by 4 | Views 3
Abstract
In this paper, we propose constrained inpainting methods to recover the usability of corrupted images. We focus on the example of face images that are masked for privacy protection but for which complete images are required for further algorithm development. The task is tackled progressively: 1) the generated images should look realistic; 2) the generated images must satisfy spatial constraints, if available; 3) when applied to video data, temporal consistency should be retained. We first present a spatial inpainting framework that synthesizes face images while incorporating spatial constraints, provided as positions of facial markers, and show that it outperforms state-of-the-art methods. Next, we propose the Spatial-Temporal Nested GAN (STN-GAN) to adapt the image inpainting framework, trained on ~200k images, to video data by incorporating temporal information using residual blocks. Experiments on multiple public datasets show that STN-GAN attains spatio-temporal consistency effectively and efficiently. Furthermore, we show that the spatial constraints can be perturbed to obtain different inpainted results from a single source.
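The abstract describes conditioning the inpainting generator on spatial constraints given as facial marker positions. The paper's exact conditioning scheme is not detailed here, but a common way to feed such constraints to a GAN generator is to stack the masked image, the binary mask, and a marker heatmap as input channels. Below is a minimal, hypothetical preprocessing sketch in that spirit (function name, channel layout, and Gaussian heatmap encoding are assumptions, not the authors' implementation):

```python
import numpy as np

def make_generator_input(image, mask, markers, heatmap_sigma=2.0):
    """Build a conditioned inpainting input (hypothetical sketch).

    image   : (H, W, 3) float array in [0, 1]
    mask    : (H, W) binary array, 1 = corrupted/masked region
    markers : list of (x, y) facial marker coordinates
    Returns : (H, W, 5) array = masked RGB + mask + marker heatmap
    """
    h, w, _ = image.shape
    # Zero out the corrupted region so the generator must synthesize it.
    masked = image * (1.0 - mask[..., None])
    # Encode each marker as a Gaussian bump, merged into one channel.
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for (mx, my) in markers:
        bump = np.exp(-((xs - mx) ** 2 + (ys - my) ** 2)
                      / (2.0 * heatmap_sigma ** 2))
        heat = np.maximum(heat, bump)
    return np.concatenate(
        [masked, mask[..., None], heat[..., None]], axis=-1
    ).astype(np.float32)
```

Because the heatmap channel is an explicit input, perturbing the marker positions changes the conditioning signal, which is one plausible mechanism behind the abstract's observation that perturbed constraints yield different inpainted results from a single source.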
Keywords
video data,spatial inpainting framework,face images,image inpainting framework,temporal information,video face inpainting,corrupted images,STN-GAN,spatial-temporal nested GAN