Blinding and blurring the multi-object tracker with adversarial perturbations

Neural Networks (2024)

Abstract
Adversarial attacks reveal a potential imperfection in deep models: they can be tricked by imperceptible perturbations added to images. Recent deep multi-object trackers combine the functionalities of detection and association, making attacks on either the detector or the association component an effective means of deception. Existing attacks focus on increasing the frequency of ID switches, which greatly damages tracking stability but is not enough to render the tracker completely ineffective. To fully explore the potential of adversarial attacks, we propose the Blind-Blur Attack (BBA), a novel attack method based on spatio-temporal motion information that fools multi-object trackers. Specifically, a simple but efficient perturbation generator is trained with a blind-blur loss, simultaneously making targets invisible to the tracker and causing the background to be regarded as moving targets. We take TraDeS as our main research tracker and verify our attack on other strong algorithms (i.e., CenterTrack, FairMOT, and ByteTrack) on the MOT-Challenge benchmark datasets (i.e., MOT16, MOT17, and MOT20). The BBA attack reduces the MOTA of TraDeS and ByteTrack from 69.1 and 80.3 to -238.1 and -357.0, respectively, indicating that it is an efficient method with a high degree of transferability.
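The two-sided objective described above can be sketched in a few lines. The following is a minimal illustrative version, not the authors' exact formulation: it assumes the tracker exposes a per-pixel detection confidence map in [0, 1], and combines a "blind" term that suppresses confidence on real targets with a "blur" term that raises confidence on background pixels so the tracker hallucinates moving targets there. The function name and cross-entropy-style form are assumptions for illustration.

```python
import numpy as np

def blind_blur_loss(scores, target_mask, eps=1e-8):
    """Toy blind-blur objective (illustrative sketch, not the paper's loss).

    scores:      per-pixel detection confidence map in [0, 1]
    target_mask: 1 on real-target pixels, 0 on background

    The "blind" term penalizes high confidence on real targets
    (making them invisible); the "blur" term penalizes low confidence
    on the background (making it look like moving targets).
    A perturbation generator would be trained to minimize this loss.
    """
    blind = -np.log(1.0 - scores[target_mask == 1] + eps).mean()
    blur = -np.log(scores[target_mask == 0] + eps).mean()
    return blind + blur
```

Under this sketch, a successfully attacked confidence map (targets suppressed, background inflated) scores a lower loss than a clean, well-detected one, which is the signal the generator's gradient descent would follow.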
Keywords
Multi-object tracking, Adversarial attack, Object detection, Computer vision