Multi-scale conditional reconstruction generative adversarial network

IMAGE AND VISION COMPUTING (2024)

Abstract
Generative adversarial networks (GANs) have become the de facto standard for high-quality image synthesis. However, modeling the distribution of complex datasets (e.g., ImageNet and COCO-Stuff) remains challenging for unsupervised approaches. This is partly due to the imbalance between the generator and the discriminator during training: the discriminator easily defeats the generator because of special views. In this paper, we propose a model called the multi-scale conditional reconstruction GAN (MS-GAN). The core concept of MS-GAN is to model the local density implicitly using instance conditions at different scales. Instance conditions are extracted from the target images via a self-supervised learning model. In addition, we align the semantic features of the observed instances by adding an additional reconstruction loss to the generator. MS-GAN can aggregate instance features at different scales and maximize semantic features. This allows the generator to learn additional comparative knowledge from instance features, leading to a better feature representation and thus improving generation performance. Experimental results on the ImageNet and COCO-Stuff datasets show that our method matches or exceeds the ICGAN framework in both FID and IS scores. Additionally, our precision score on the ImageNet dataset improves from 74.2% to 79.9%.
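The abstract does not give the exact loss formulation, but a minimal PyTorch sketch of the stated idea might look like the following: the generator is trained with an adversarial term plus a reconstruction term that pulls the generated image's features toward the multi-scale instance conditions. The `generator`, `discriminator`, `feature_extractor`, and `lambda_rec` names and interfaces are hypothetical placeholders for illustration, not taken from the paper.

```python
# Hypothetical sketch (not the paper's implementation): generator loss combining
# a non-saturating adversarial term with a multi-scale feature reconstruction term.
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, feature_extractor,
                   z, instance_conditions, lambda_rec=1.0):
    """instance_conditions: list of feature tensors at different scales, assumed
    to be extracted from a target image by a self-supervised encoder."""
    # Generate an image conditioned on the (concatenated) instance features.
    cond = torch.cat(instance_conditions, dim=-1)
    fake = generator(z, cond)

    # Non-saturating adversarial loss: softplus(-D(fake)) = -log(sigmoid(D(fake))).
    adv = F.softplus(-discriminator(fake, cond)).mean()

    # Reconstruction loss: align the generated image's features with the observed
    # instance features at each scale (assumes matching shapes per scale).
    fake_feats = feature_extractor(fake)  # list of tensors, one per scale
    rec = sum(F.l1_loss(f, c) for f, c in zip(fake_feats, instance_conditions))

    return adv + lambda_rec * rec
```

Under this reading, the reconstruction term gives the generator a dense, per-scale learning signal in addition to the single scalar it receives from the discriminator, which is consistent with the abstract's claim that the generator learns additional knowledge from instance features.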
Keywords
Generative adversarial network, Unsupervised generation, Multi-scale instance, Reconstructed losses