Mutual Dual-Task Generator With Adaptive Attention Fusion for Image Inpainting

IEEE TRANSACTIONS ON MULTIMEDIA (2024)

Abstract
Image segmentation can reveal the semantic structure of an image, which provides helpful guidance for image inpainting. Notably, it can help mitigate artifacts on the boundaries between different semantic regions during the inpainting process. Existing semantic guidance-based image inpainting methods provide only one-way guidance, from the semantic segmentation task to the image inpainting task. There is no feedback from the inpainting results to adjust the guidance process, which causes inferior performance. To tackle this issue, this work proposes mutual dual-task generators that establish an interaction between the image segmentation and image inpainting tasks. Thus, semantic segmentation guides image inpainting and also receives feedback from image inpainting. These two processes interact with each other and progressively improve the inpainting quality. The mutual dual-task generator consists of a shared encoder and mutual decoders with a bidirectional Cross-domain Feature DeNormalization (CFDN) module inside, which hierarchically models Segmentation-guided image Texture (ST) generation and Texture-guided semantic Segmentation (TS) generation. At the end of the mutual decoders, an Adaptive Attention Fusion (AAF) module is proposed to augment the texture and semantic class affinity between pixels, further refining the inpainted results. Experimental results demonstrate that the proposed mutual dual-task generator pipeline achieves superior inpainting performance over state-of-the-art methods on three public datasets.
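The abstract does not spell out CFDN's equations, but "feature denormalization" conditioned on another domain generally follows the SPADE-style pattern: normalize one task's features, then rescale and shift them with modulation parameters predicted from the other task's features. The sketch below illustrates that general idea only; all function names, shapes, and the use of simple linear maps (standing in for 1x1 convolutions) are assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def cross_domain_denorm(x, guide, w_gamma, b_gamma, w_beta, b_beta, eps=1e-5):
    """SPADE-style cross-domain feature denormalization (illustrative sketch).

    x:     (C, H*W)  features from one decoder branch (e.g., texture)
    guide: (Cg, H*W) features from the other branch (e.g., segmentation)
    The guide predicts per-location scale (gamma) and shift (beta)
    that modulate the instance-normalized x.
    """
    # Instance-normalize x over spatial locations, per channel.
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True)
    x_norm = (x - mu) / (sigma + eps)
    # Guide-conditioned modulation (a linear map over channels stands in
    # for the 1x1 convolutions such modules typically use).
    gamma = w_gamma @ guide + b_gamma
    beta = w_beta @ guide + b_beta
    return gamma * x_norm + beta

# "Bidirectional" use: texture features modulated by segmentation features
# (ST direction); the TS direction mirrors this with roles swapped.
rng = np.random.default_rng(0)
C, Cg, HW = 4, 3, 16
tex = rng.standard_normal((C, HW))   # texture-branch features
seg = rng.standard_normal((Cg, HW))  # segmentation-branch features
Wg, bg = rng.standard_normal((C, Cg)), np.zeros((C, 1))
Wb, bb = rng.standard_normal((C, Cg)), np.zeros((C, 1))
tex_guided = cross_domain_denorm(tex, seg, Wg, bg, Wb, bb)  # ST direction
```

The key property of this scheme is that semantic information enters the texture stream (and vice versa) through spatially varying scale/shift parameters rather than by naive feature concatenation, so the guidance can act differently at each pixel location.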
Keywords
Image inpainting, semantic segmentation guidance, attention fusion