Would Deep Generative Models Amplify Bias in Future Models?
CVPR 2024
Abstract
We investigate the impact of deep generative models on potential social
biases in upcoming computer vision models. As the internet witnesses an
increasing influx of AI-generated images, concerns arise regarding inherent
biases that may accompany them, potentially leading to the dissemination of
harmful content. This paper explores whether a detrimental feedback loop,
resulting in bias amplification, would occur if generated images were used as
the training data for future models. We conduct simulations by progressively
substituting original images in COCO and CC3M datasets with images generated
through Stable Diffusion. The modified datasets are used to train OpenCLIP and
image captioning models, which we evaluate in terms of quality and bias.
Contrary to expectations, our findings indicate that introducing generated
images during training does not uniformly amplify bias. Instead, instances of
bias mitigation across specific tasks are observed. We further explore the
factors that may influence these phenomena, such as artifacts in image
generation (e.g., blurry faces) or pre-existing biases in the original
datasets.