Adversarial robustness analysis of LiDAR-included models in autonomous driving

High-Confidence Computing (2024)

Abstract
In autonomous driving systems, perception is pivotal, relying chiefly on sensors such as LiDAR and cameras for environmental awareness. LiDAR, valued for its detailed depth perception, is increasingly integrated into autonomous vehicles. In this article, we analyze the robustness of four LiDAR-included models against adversarial points under physical constraints. We first introduce an attack technique that, by adding only a limited number of physically constrained adversarial points above a vehicle, can make the vehicle undetectable by the LiDAR-included models. Experiments reveal that adversarial points degrade the detection capabilities of both LiDAR-only and LiDAR-camera fusion models, and that adding more adversarial points tends to raise the attack success rate. Notably, voxel-based models are more susceptible to deception by these adversarial points. We also investigate how the distance and angle of the added adversarial points affect the attack success rate: typically, the farther away the victim object to be hidden and the closer it is to the front of the LiDAR, the higher the attack success rate. Additionally, we experimentally demonstrate that our generated adversarial points possess good cross-model adversarial transferability, and we validate the effectiveness of our proposed optimization method through ablation studies. Furthermore, we propose a new plug-and-play, model-agnostic defense method based on the concept of point smoothness. The ROC curve of this defense method shows an AUC of approximately 0.909, demonstrating its effectiveness.
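The abstract does not detail the point-smoothness defense, so the following is only a minimal, hypothetical sketch of the general idea: score each LiDAR return by the mean distance to its k nearest neighbours (a crude local-smoothness measure) and flag points whose score is far above the typical value, since a handful of injected points floating above a vehicle tend to be locally sparse. The function names, the k-NN score, and the ratio threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def knn_smoothness(points, k=5):
    """Mean distance from each point to its k nearest neighbours.
    NOTE: illustrative smoothness score, not the paper's exact metric.
    points: (N, 3) array of LiDAR returns."""
    # Full pairwise distance matrix; fine for small clouds, O(N^2) memory.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Sort each row ascending; column 0 is the self-distance (0), skip it.
    dist.sort(axis=1)
    return dist[:, 1:k + 1].mean(axis=1)

def flag_adversarial(points, k=5, ratio=3.0):
    """Flag points whose smoothness score exceeds `ratio` times the
    median score over the cloud (hypothetical decision rule)."""
    scores = knn_smoothness(points, k)
    return scores > ratio * np.median(scores)

# Toy example: a dense planar "surface" plus three isolated points
# injected well above it, mimicking physically added adversarial points.
xs, ys = np.meshgrid(np.arange(10) * 0.1, np.arange(10) * 0.1)
surface = np.stack([xs.ravel(), ys.ravel(), np.zeros(100)], axis=1)
attack = np.array([[0.0, 0.0, 5.0], [0.5, 0.5, 6.0], [0.9, 0.0, 5.5]])
cloud = np.vstack([surface, attack])
flags = flag_adversarial(cloud)
```

In this toy setup the three injected points are flagged while the surface points are not; a real pipeline would sweep the threshold to trade off true- and false-positive rates, which is how an ROC curve such as the reported AUC of about 0.909 would be produced.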
Keywords
Autonomous driving, LiDAR, Adversarial example, Defense