Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Abstract
Generative models have demonstrated revolutionary success in various visual
creation tasks, but in the meantime, they have been exposed to the threat of
leaking private information of their training data. Several membership
inference attacks (MIAs) have been proposed to exhibit the privacy
vulnerability of generative models by classifying a query image as a training
dataset member or nonmember. However, these attacks suffer from major
limitations, such as requiring shadow models and white-box access, and either
ignoring or only focusing on the unique property of diffusion models, which
block their generalization to multiple generative models. In contrast, we
propose the first generalized membership inference attack against a variety of
generative models such as generative adversarial networks, variational
autoencoders, implicit functions, and the emerging diffusion models. We
leverage only generated distributions from target generators and auxiliary
non-member datasets, thus treating target generators as black boxes and remaining
agnostic to their architectures or application scenarios. Experiments validate
that all the generative models are vulnerable to our attack. For instance, our
work achieves attack AUC $>0.99$ against DDPM, DDIM, and FastDPM trained on
CIFAR-10 and CelebA. The attack against VQGAN, LDM (for text-conditional
generation), and LIIF achieves AUC $>0.90$. As a result, we
appeal to our community to be aware of such privacy leakage risks when
designing and publishing generative models.
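
The abstract describes the attack only at a high level: draw samples from the black-box target generator, contrast that generated distribution with an auxiliary non-member dataset, and use the resulting classifier to score query images for membership. The sketch below illustrates that idea under stated assumptions; it is not the authors' implementation, and the names (`SmallCNN`, `generated_images`, `auxiliary_nonmembers`) are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of a black-box MIA of the kind the
# abstract describes: train a binary classifier to separate the target
# generator's output distribution from an auxiliary non-member dataset, then
# score a query image by how "generator-like" it looks.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class SmallCNN(nn.Module):
    """Hypothetical attack classifier: image -> membership logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def train_attack(generated_images, auxiliary_nonmembers, epochs=5):
    """generated_images: samples drawn from the black-box target generator.
    auxiliary_nonmembers: images known not to be in its training set."""
    x = torch.cat([generated_images, auxiliary_nonmembers])
    y = torch.cat([
        torch.ones(len(generated_images)),       # generated samples act as member proxies
        torch.zeros(len(auxiliary_nonmembers)),  # auxiliary data acts as non-members
    ])
    loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb).squeeze(1), yb)
            loss.backward()
            opt.step()
    return model


@torch.no_grad()
def membership_score(model, query_image):
    """Higher score -> query image is more likely a training-set member."""
    return torch.sigmoid(model(query_image.unsqueeze(0))).item()
```

Because the classifier only consumes images generated by the target model and auxiliary non-member images, the procedure needs no shadow models and no access to the generator's weights, which is the property the abstract emphasizes.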
Keywords
Algorithms: Explainable, fair, accountable, privacy-preserving, ethical computer vision; Algorithms: Generative models for image, video, 3D, etc.