Coherent adversarial deepfake video generation

Signal Processing (2023)

Abstract
Deepfake video technology has developed rapidly and attracted public concern due to its potentially wide applications, yet deepfake videos can be easily distinguished by DNN-based detection approaches. Because of the vulnerability of DNNs, adversarial attacks can be an effective way to degrade deepfake detection, but current adversarial attack techniques are commonly designed for individual images and are easily perceived at the video level. To reveal the weakness of current attack methods, we first propose a robust detector that exploits temporal consistency to discriminate clean videos from weakly adversarial deepfake videos, achieving a maximum success rate of 100%. We then propose a novel framework for generating high-quality adversarial deepfake videos that simultaneously fool deepfake detectors and evade the detection of adversarial perturbations. Two pivotal techniques improve the visual quality and imperceptibility of the adversarial perturbations: (i) optical flow is adopted to constrain the adversarial perturbations to be temporally coherent across frames; (ii) an adaptive distortion cost measures the complexity of each frame and helps keep the adversarial modification imperceptible. We demonstrate the effectiveness of our methods in disrupting representative DNN-based deepfake detectors. Extensive experiments show large improvements in the coherence, visual quality, and imperceptibility of the adversarial deepfake videos. Furthermore, we hope that our adversarial deepfake generation framework can shed some light on how detection methods may fix their weaknesses. © 2022 Published by Elsevier B.V.
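The abstract does not give implementation details, but the two techniques can be illustrated with a short, hedged sketch. The snippet below is not the authors' code: it warps the previous frame's adversarial perturbation into the current frame with dense optical flow (OpenCV's Farnebäck estimator) so that perturbations stay temporally coherent, and uses a simple texture-based weight as a stand-in for the paper's adaptive distortion cost. The function names (`warp_perturbation`, `complexity_weight`, `coherent_perturbation`) and the specific flow and complexity choices are assumptions for illustration only.

```python
# Hedged sketch (not the authors' code): propagate adversarial perturbations
# along optical flow for temporal coherence, and weight the perturbation
# budget by a per-pixel texture measure standing in for the paper's
# adaptive distortion cost. Names and parameter values are assumed.
import cv2
import numpy as np

def warp_perturbation(prev_pert, prev_gray, curr_gray):
    """Warp the previous frame's perturbation into the current frame.

    Uses dense backward flow (current -> previous): each current-frame pixel
    samples the perturbation value at the location it came from.
    """
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_pert, map_x, map_y, cv2.INTER_LINEAR)

def complexity_weight(gray):
    """Per-pixel texture complexity in [0, 1]: textured regions can hide
    larger modifications, so they receive a larger perturbation budget."""
    mag = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F, ksize=3))
    return mag / (mag.max() + 1e-8)

def coherent_perturbation(prev_pert, prev_gray, curr_gray, eps=4.0):
    """Initialize the current frame's perturbation from the warped previous
    one and clip it with a complexity-adaptive per-pixel budget."""
    pert = warp_perturbation(prev_pert, prev_gray, curr_gray)
    budget = eps * (0.5 + 0.5 * complexity_weight(curr_gray))[..., None]
    return np.clip(pert, -budget, budget)
```

In a full attack, the warped and budget-clipped perturbation would then be refined against the target deepfake detector (e.g., by a few gradient steps) while staying within the adaptive budget; that refinement step is specific to the paper and is not reproduced here.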
Keywords
Deepfake detection, Adversarial attack, Anti-forensics