Background defocus method of image perception guided CycleGAN network

Ye Wu-jian, Lin Zhen-yi, Liu Yi-jun, Liu Cheng-min

CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS (2023)

Abstract
Existing image-to-image translation algorithms based on generative adversarial networks usually extract features from the whole input image indiscriminately during background defocus, which makes it difficult for the network to distinguish the foreground from the background and easily leads to image distortion. We propose a background defocus method based on an image-perception-guided CycleGAN network, in which image perception information is introduced to improve the performance of the model. The perception information consists of attention information and depth-of-field information: the former guides the network to attend to foreground and background regions separately so that the two can be distinguished, while the latter enhances the perception of foreground targets, enables effective intelligent focusing, and reduces image distortion, improving the background defocus effect. Experimental results show that the proposed method can effectively separate foreground and background during background defocus, reduce image distortion, and produce more realistic results. In addition, a questionnaire survey is used to compare the generated images with those of existing methods. Compared with state-of-the-art methods, the proposed method produces the best image quality, and its model size of 56.10 MB and image generation time of 47 ms are also clear advantages.
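The abstract only states that attention and depth-of-field maps guide the CycleGAN generator, without giving the network details. The following is a minimal PyTorch sketch, not the authors' implementation, of one plausible way such guidance could be wired in: the two perception maps are concatenated with the input image before encoding, and the attention map is also used to keep the foreground sharp while the generated branch defocuses the background. All module names, channel sizes, and the fusion strategy are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PerceptionGuidedGenerator(nn.Module):
    """Hypothetical CycleGAN-style generator guided by attention and depth maps."""

    def __init__(self, in_channels: int = 3, base_channels: int = 64):
        super().__init__()
        # Encoder over the RGB image concatenated with the two guidance maps
        # (1-channel attention map + 1-channel depth map).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels + 2, base_channels, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels * 2),
            nn.ReLU(inplace=True),
        )
        # Decoder back to an RGB image with a defocused background.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, in_channels, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, attention: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Concatenate the perception maps with the image so the generator can
        # treat foreground and background regions differently.
        x = torch.cat([image, attention, depth], dim=1)
        defocused = self.decoder(self.encoder(x))
        # Keep the attended foreground sharp and let the generated branch
        # handle the background: a simple alpha blend driven by the attention map.
        return attention * image + (1.0 - attention) * defocused


if __name__ == "__main__":
    g = PerceptionGuidedGenerator()
    img = torch.rand(1, 3, 256, 256) * 2 - 1   # image in [-1, 1]
    attn = torch.rand(1, 1, 256, 256)          # foreground attention in [0, 1]
    depth = torch.rand(1, 1, 256, 256)         # normalized depth map
    print(g(img, attn, depth).shape)           # torch.Size([1, 3, 256, 256])
```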
Keywords
background defocus, image perception, CycleGAN network, intelligent focusing