Adversarial Attack for Deep Steganography Based on Surrogate Training and Knowledge Diffusion

Applied Sciences-Basel (2023)

Abstract
Deep steganography (DS), which uses neural networks to hide one image inside another, has performed well in terms of invisibility, embedding capacity, and related metrics. Current steganalysis methods for DS can only detect or remove secret images hidden in natural images; they cannot analyze or modify the secret content. Our technique is the first approach that not only effectively prevents covert communication using DS but also analyzes and modifies its content. We propose a novel adversarial attack method for DS that covers both white-box and black-box scenarios. For the white-box attack, several novel loss functions are used to construct a gradient- and optimizer-based adversarial attack that can delete or modify secret images. For the more realistic black-box case, we propose a method based on surrogate training and a knowledge distillation technique. All methods were tested on the Tiny ImageNet and MS COCO datasets. The experimental results show that the proposed attack can completely remove, or even modify, the secret image in the container image while preserving the container's high quality. More importantly, the proposed adversarial attack method can also be regarded as a new DS approach.
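As a rough illustration of the white-box attack described above, the sketch below optimizes a small perturbation on the container image so that a reveal network recovers a blank image instead of the hidden secret, while a fidelity term keeps the container visually unchanged. It assumes PyTorch and a pretrained reveal network `reveal_net`; the loss weights, step count, and perturbation budget are illustrative placeholders, not the paper's exact configuration.

```python
# Hypothetical sketch of a white-box removal attack on deep steganography.
# Assumes a PyTorch reveal network `reveal_net` that extracts the secret image
# from a container image. All names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def whitebox_remove_secret(container, reveal_net, steps=100, lr=0.01, eps=8 / 255):
    """Perturb `container` so the reveal network recovers (approximately) nothing."""
    reveal_net.eval()
    delta = torch.zeros_like(container, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    blank = torch.zeros_like(container)            # target: an empty secret image

    for _ in range(steps):
        adv = (container + delta).clamp(0, 1)
        revealed = reveal_net(adv)
        # Removal loss: push the revealed secret toward the blank target.
        loss_remove = F.mse_loss(revealed, blank)
        # Fidelity loss: keep the adversarial container close to the original.
        loss_fidelity = F.mse_loss(adv, container)
        loss = loss_remove + 10.0 * loss_fidelity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation within an L_inf budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (container + delta).clamp(0, 1).detach()
```

Replacing the blank target with an attacker-chosen image would turn this removal attack into a modification attack, which corresponds to the deletion versus modification of secret images mentioned in the abstract.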
Keywords
adversarial examples,deep hiding,deep steganography,deep steganalysis
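The black-box attack mentioned in the abstract relies on surrogate training with a knowledge distillation technique. The minimal sketch below, assuming only query access to a victim reveal network `victim_reveal`, trains a surrogate network to mimic the victim's revealed outputs; the trained surrogate could then stand in for the victim in a white-box-style attack such as the one sketched above. The model classes, data loader, and hyperparameters are hypothetical, not the paper's actual configuration.

```python
# Hypothetical sketch of surrogate training via knowledge distillation,
# assuming only black-box (query) access to the victim reveal network.
import torch
import torch.nn.functional as F

def train_surrogate(victim_reveal, surrogate, container_loader, epochs=10, lr=1e-4):
    """Train `surrogate` to imitate the victim's reveal behavior on queried containers."""
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=lr)
    victim_reveal.eval()
    for _ in range(epochs):
        for containers in container_loader:
            with torch.no_grad():
                teacher_out = victim_reveal(containers)   # black-box query to the victim
            student_out = surrogate(containers)
            # Distillation loss: match the victim's revealed images pixel-wise.
            loss = F.mse_loss(student_out, teacher_out)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return surrogate
```

A pixel-wise MSE distillation loss is used here purely for simplicity; the paper's actual surrogate-training objective may differ.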