Adversarial attacks on YOLACT instance segmentation

Computers & Security (2022)

Abstract
Adversarial attacks have stimulated research interest in the field of deep learning security. In autonomous driving, instance segmentation helps a vehicle identify the drivable area in its environment, so a successful adversarial attack on instance segmentation could cause a traffic accident. However, few studies have examined adversarial attacks on instance segmentation, which makes them a meaningful and practical research target. In this paper, we propose an improved Projected Gradient Descent (PGD) attack that produces adversarial examples against the total loss of You Only Look At CoefficienTs (YOLACT) instance segmentation. We first design the loss function and compute the adversarial gradient, then apply the improved PGD to obtain adversarial examples for YOLACT. Our attack is efficient and powerful in both white-box and black-box settings, and is applicable to a variety of neural network architectures. On COCO 2017, under white-box attacks, our method reduces YOLACT with a ResNet101 backbone to 1.14% box mean average precision (mAP) and 1.50% mask mAP. We also compare the similarity between clean images and adversarial examples, as well as the running time, across three different backbones, and we find that the box regression loss, classification loss, and mask loss are each effective on their own for generating adversarial examples. Our research should inspire further work on efficient and effective defenses for instance segmentation.
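The abstract does not spell out the update rule of the improved PGD, but the standard PGD it builds on is well known. Below is a minimal PyTorch sketch of PGD ascent on a combined detection loss; `model` here is a hypothetical wrapper that returns the scalar total loss (box regression + classification + mask) for an input image, and the epsilon, step size, and iteration count are illustrative defaults, not the paper's settings.

```python
import torch

def pgd_attack(model, image, epsilon=8/255, alpha=2/255, steps=10):
    """Sketch of standard L-infinity PGD on a model's total loss.

    `model(x)` is assumed to return a scalar loss (e.g., the sum of
    box regression, classification, and mask losses); this wrapper is
    hypothetical and does not reproduce the paper's improved PGD.
    """
    adv = image.clone().detach()
    # Start from a random point inside the epsilon-ball (standard PGD init).
    adv = adv + torch.empty_like(adv).uniform_(-epsilon, epsilon)
    adv = torch.clamp(adv, 0.0, 1.0).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = model(adv)  # total loss on the current perturbed image
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss along the sign of the adversarial gradient.
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball around the clean image,
        # then into the valid pixel range.
        adv = torch.min(torch.max(adv, image - epsilon), image + epsilon)
        adv = torch.clamp(adv, 0.0, 1.0)

    return adv.detach()
```

Targeting the total loss, as the paper does, perturbs all three YOLACT outputs at once; restricting `model` to return only the box, classification, or mask term would correspond to the per-loss attacks the authors also evaluate.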
Keywords
Adversarial attack, Instance segmentation, White-box attack, Deep learning, Traffic environment perception