PAID: Perturbed Image Attacks Analysis and Intrusion Detection Mechanism for Autonomous Driving Systems

Proceedings of the 9th ACM Cyber-Physical System Security Workshop, CPSS 2023 (2023)

Abstract
Modern Autonomous Vehicles (AVs) leverage road context information collected through sensors (e.g., LiDAR, radar, and camera) to support the automated driving experience. Once such information is collected, a neural network model predicts the subsequent actions that the AV executes. However, state-of-the-art research findings have shown that an attacker can compromise the accuracy of the neural network model's predictions. Indeed, mispredicting the subsequent actions can have harmful consequences for road users' safety. In this paper, we analyze the disruptive impact of adversarial attacks on a road context-aware Intrusion Detection System (RAIDS) and propose a solution to mitigate such effects. To this end, we implement five state-of-the-art evasion attacks on the vehicle camera images that the IDS uses to monitor internal vehicular traffic. Our experimental results show how this type of attack can reduce the attack detection accuracy of such detectors down to 2.83%. To combat such adversarial attacks, we investigate different countermeasures and propose PAID, a robust context-aware IDS that leverages feature squeezing and GPS to detect intrusions. We evaluate PAID's capability to identify such attacks, and the implementation results confirm that PAID achieves a detection accuracy of up to 93.9%, outperforming RAIDS.
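As a hypothetical illustration of the feature-squeezing idea mentioned above (not PAID's actual implementation, whose details are in the paper), a detector can compare a model's prediction on an image against its prediction on a bit-depth-reduced copy: adversarial perturbations tend to be destroyed by squeezing, so a large prediction shift flags the input as suspicious. The function names, the bit depth, and the threshold below are all assumed for the sketch.

```python
import numpy as np

def squeeze_bit_depth(image, bits=4):
    """Reduce colour bit depth of a [0, 1]-valued image array
    (a standard feature-squeezing transform; 'bits' is an assumed default)."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

def looks_adversarial(model_predict, image, threshold=0.5):
    """Flag an input whose prediction shifts sharply after squeezing.

    model_predict: callable mapping an image array to a prediction vector
    threshold: assumed L1-distance cutoff, tuned on clean data in practice
    """
    p_orig = model_predict(image)
    p_squeezed = model_predict(squeeze_bit_depth(image))
    # Large L1 gap between the two predictions suggests an adversarial perturbation
    return float(np.abs(p_orig - p_squeezed).sum()) > threshold
```

A benign image should yield nearly identical predictions before and after squeezing, so the L1 gap stays below the threshold; a perturbed image typically does not.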
Keywords
Adversarial Attacks, Autonomous Vehicles, Image Perturbation, Adversarial Intrusion Detection, Road Context