Evading Deepfake-Image Detectors with White- and Black-Box Attacks

CVPR Workshops (2020)

Abstract
It is now possible to synthesize highly realistic images of people who don't exist. Such content has, for example, been implicated in the creation of fraudulent social-media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each image pixel to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
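The "flip the lowest bit of each pixel" white-box attack described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical illustration and not the authors' implementation: it assumes a generic PyTorch forensic classifier (here called `detector`) that returns a single "synthetic" logit for an image in [0, 1], and it takes one signed-gradient step of size 1/255 rather than the paper's full optimization.

```python
# Minimal sketch of a lowest-bit white-box evasion attack on a deepfake-image
# detector. Assumptions (not from the paper's code): `detector` is a PyTorch
# module returning a single "synthetic" logit of shape (1, 1); `image` is a
# float tensor in [0, 1] with shape (1, 3, H, W).
import torch


def lowest_bit_attack(detector: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    detector.eval()
    adv = image.clone().requires_grad_(True)

    # Higher score means "synthetic" in this sketch; we want to push it down.
    score = detector(adv).squeeze()
    score.backward()

    with torch.no_grad():
        # Move each pixel by at most one 8-bit intensity level (1/255) in the
        # direction that lowers the "synthetic" score, i.e. change at most the
        # least-significant bit of each pixel's 8-bit encoding.
        perturbed = adv - (1.0 / 255.0) * adv.grad.sign()
        perturbed = perturbed.clamp(0.0, 1.0)

    # Re-quantize to valid 8-bit values so the perturbation survives saving
    # the image to disk.
    return torch.round(perturbed * 255.0) / 255.0
```

In practice a stronger attacker would iterate this step or minimize distortion directly, as the paper's case studies do; the sketch only shows why a one-bit-per-pixel budget is already a meaningful threat model.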
Keywords
fraudulent social media profiles, target classifier, synthesizer, image area, image generators, AUC, state-of-the-art classifier, attack case studies, synthetic content, neural network, popular forensic approach, synthetically-generated content, disinformation campaigns, black-box attack, deepfake-image detectors, image-forensic classifiers, significant vulnerabilities