Adversarial example generation method for object detection in remote sensing images

Wanghan Jiang, Yue Zhou, Xue Jiang

IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium (2023)

Abstract
Object detection in remote sensing images is an essential application of deep learning. However, deep learning models are vulnerable to adversarial attacks, which can undermine their reliability and accuracy. While significant progress has been made on adversarial attacks, most work has focused on image classification, because object detection is a more complex task. In this paper, we propose a target camouflage method based on adversarial attacks that misleads detectors and hides targets with minimal pixel perturbations. Experiments on the DIOR dataset demonstrate the effectiveness of our approach: the generated adversarial examples cause Faster R-CNN to fail to detect objects while keeping the perturbations minimal.
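The abstract does not spell out the attack procedure, so the following is a minimal PyTorch sketch of one plausible reading: an iterative, L-infinity-bounded perturbation that drives down a Faster R-CNN's detection scores until targets are hidden. The loss choice, step size, bound, and the torchvision COCO-pretrained weights are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torchvision

# Victim detector. The paper attacks a Faster R-CNN trained on DIOR; torchvision only
# ships COCO weights, so this model stands in for the fine-tuned detector.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def camouflage_attack(image, steps=40, alpha=1.0 / 255, epsilon=8.0 / 255):
    """Perturb `image` (float tensor, shape (3, H, W), values in [0, 1]) so that
    the detector no longer reports any confident objects.

    The loss (sum of predicted detection scores) and the L-infinity projected
    gradient update below are illustrative assumptions, not the paper's method.
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        pred = detector([x_adv])[0]           # predictions for this single image
        if pred["scores"].numel() == 0:       # nothing detected any more: stop early
            break
        loss = pred["scores"].sum()           # total confidence over all detections
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # step down the confidence
            delta = (x_adv - image).clamp(-epsilon, epsilon)  # keep the perturbation small
            x_adv = (image + delta).clamp(0.0, 1.0)
    return x_adv.detach()
```

Under these assumptions, the attack succeeds when the detector returns no detections for `camouflage_attack(image)` while the perturbation stays within the epsilon bound.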
Keywords
Object detection, Adversarial attacks, Target camouflage