Robust Attack on Deep Learning based Radar HRRP Target Recognition

Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (2019)

Abstract
In the past few years, deep learning has attracted increasing attention for HRRP-based radar automatic target recognition (RATR) because of its powerful ability to learn features automatically from training data. However, recent studies show that deep learning models are vulnerable to adversarial examples. In this paper, we verify that adversarial examples also exist in deep learning based HRRP target recognition. A novel adversarial attack algorithm, called Robust HRRP Attack (RHA), is proposed to generate adversarial perturbations that remain robust in the real world. Experimental results on measured HRRP data show that RHA significantly degrades HRRP recognition performance, which indicates that our method is efficient and robust.
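The page does not describe the RHA algorithm itself. As a rough illustration only, the sketch below shows a generic FGSM-style perturbation of HRRP range profiles against a deep classifier in PyTorch; the model, tensor shapes, and epsilon value are illustrative assumptions, not the authors' method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, hrrp: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return adversarially perturbed HRRP range profiles (illustrative sketch).

    hrrp  : (batch, num_range_cells) float tensor, assumed normalized to [0, 1].
    label : (batch,) long tensor of ground-truth class indices.
    """
    hrrp = hrrp.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(hrrp), label)   # classification loss on clean profiles
    loss.backward()
    # Move each range cell in the direction that increases the loss.
    adv = hrrp + epsilon * hrrp.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

A perturbation aimed at real-world robustness, in the spirit of the paper's title, would typically average the loss gradient over random signal transformations (for example amplitude scaling or small range shifts) before taking the sign step; the single-step version above is only the simplest starting point.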
Keywords
novel adversarial attack algorithm, robust adversarial perturbations, measured HRRP data, HRRP recognition performance, radar HRRP target recognition, deep learning models, adversarial examples, robust HRRP attack