Robust Source Camera Identification Against Adversarial Attacks
Computers & Security (2021)
Abstract
The application of Deep Neural Networks (DNNs) has dramatically improved the performance of Source Camera Identification (SCI), but DNN-based SCI readily suffers from adversarial attacks. These attacks raise security problems by tampering with the identification outcomes through imperceptible noise. To address this issue, we analyze the feature extraction mapping of DNN-based SCI models on manifolds and discover that the vulnerability stems from oscillation of this mapping. In light of this, we take local smoothness and information monotonicity of the feature extraction mapping as a new design principle for robust SCI, and accordingly develop a defensive scheme. The proposed scheme constructs a locally smooth mapping that guarantees information monotonicity and achieves sufficient statistics by minimizing the Kullback-Leibler Divergence (KLD) between the local statistical coordinates on two manifolds. To enhance usability, we implement the method with a Pre-Defense Network (PDN) trained by a two-phase training strategy, which ensures robustness, accuracy, and portability. Experiments on the Dresden Image Dataset demonstrate that the proposed defense not only offers strong robustness for the DNN-based SCI model against adversarial attacks, but also yields comparable or even superior identification performance over existing defense methods. Moreover, the PDN retains its defensive effect when migrated to other DNN-based SCI models without extra retraining. (C) 2020 Elsevier Ltd. All rights reserved.
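The abstract describes a pre-defense module trained by matching feature statistics between clean and defended inputs via KLD while preserving identification accuracy. Below is a minimal, hedged PyTorch sketch of that general idea; all names (PreDefenseNet, sci_features, sci_classifier) are hypothetical placeholders and not the authors' released code or exact objective.

```python
# Sketch: a small "pre-defense" network placed before a fixed SCI classifier,
# trained so that feature statistics of defended (possibly perturbed) inputs
# stay close, in the Kullback-Leibler sense, to those of clean inputs,
# while identification accuracy is maintained. Assumed components are marked.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreDefenseNet(nn.Module):
    """Hypothetical lightweight image-to-image network prepended to the SCI model."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual connection keeps the mapping close to identity (locally smooth).
        return x + self.body(x)

def feature_kld(feat_defended, feat_clean):
    """KL divergence between softmax-normalized feature statistics."""
    p = F.log_softmax(feat_defended, dim=1)
    q = F.softmax(feat_clean, dim=1)
    return F.kl_div(p, q, reduction="batchmean")

def defense_loss(pdn, sci_features, sci_classifier, x_clean, x_adv, labels):
    """Illustrative objective: match clean feature statistics (KLD term) and
    keep correct identification on defended adversarial inputs (CE term)."""
    with torch.no_grad():
        f_clean = sci_features(x_clean)      # frozen SCI feature extractor (assumed)
    f_def = sci_features(pdn(x_adv))         # features of defended input
    return feature_kld(f_def, f_clean) + F.cross_entropy(sci_classifier(f_def), labels)
```

In such a setup, only the PDN parameters would be updated, so the defense could in principle be prepended to another SCI model without retraining that model, which matches the portability claim in the abstract.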
Keywords
Source camera identification, Robustness, Adversarial attacks, Deep neural networks, Smooth mapping, Information monotonicity