Privacy-enhanced generative adversarial network with adaptive noise allocation

Knowledge-Based Systems (2023)

Abstract
Generative adversarial networks (GANs) have become hugely popular by virtue of their impressive ability to generate realistic samples. Although GANs alleviate the arduous data-collection problem, their complex model structure makes them prone to memorizing training samples. Thus, GANs may not provide sufficient privacy guarantees, and there is a considerable risk of inadvertently divulging private data. To alleviate this issue, we design a privacy-enhanced GAN based on differential privacy. We first integrate the truncated concentrated differential privacy technique into the GAN to mitigate privacy leakage with a tighter privacy bound. Then, according to the different privacy demands of users in real-world scenarios, we design two adaptive noise allocation strategies that dynamically inject noise into the gradients at each iteration. The distinct strategies provide an intuitive handle for choosing a suitable option and striking an elegant compromise between privacy and utility in different scenarios. Furthermore, we offer rigorous analyses from the perspectives of privacy preservation and privacy defense to demonstrate that our algorithm fulfills differential privacy guarantees. Extensive experiments on real-world datasets show that our algorithm generates high-quality samples while achieving an excellent trade-off between model performance and privacy guarantees. (c) 2023 Elsevier B.V. All rights reserved.
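The abstract describes injecting noise into gradients at each training iteration under an adaptive allocation strategy. As a rough illustration only, the following sketch shows the general DP-SGD-style pattern such a scheme builds on: clip the gradient to bound sensitivity, then add Gaussian noise whose scale follows a schedule. The linear-decay schedule (`adaptive_sigma`) and all parameter names here are hypothetical assumptions for illustration, not the paper's actual strategies or tCDP accounting.

```python
import numpy as np

def clip_gradient(grad, clip_norm):
    # Scale the gradient down so its L2 norm is at most clip_norm,
    # bounding the sensitivity of a single update.
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))

def adaptive_sigma(iteration, total_iters, sigma_start, sigma_end):
    # Hypothetical linear-decay schedule: inject more noise in early
    # iterations and less later. The paper's two strategies differ;
    # this is only a placeholder for "adaptive noise allocation".
    frac = iteration / max(total_iters - 1, 1)
    return sigma_start + frac * (sigma_end - sigma_start)

def noisy_gradient(grad, iteration, total_iters,
                   clip_norm=1.0, sigma_start=2.0, sigma_end=0.5,
                   rng=None):
    # Clip, then add Gaussian noise scaled by the current sigma.
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = clip_gradient(grad, clip_norm)
    sigma = adaptive_sigma(iteration, total_iters, sigma_start, sigma_end)
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise
```

In a GAN setting, such a noisy update would typically be applied to the discriminator's gradients, since the discriminator touches real training data directly; the privacy cost of each iteration would then be tracked by the paper's truncated concentrated differential privacy accountant.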
Keywords
generative adversarial network, adaptive noise allocation, privacy-enhanced