Training Data Poisoning in ML CAD: Backdooring DL based Lithographic Hotspot Detectors

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2020)

Abstract
Recent efforts to enhance computer-aided design (CAD) flows have seen the proliferation of machine learning (ML) based techniques. However, despite achieving state-of-the-art performance in many domains, techniques such as deep learning (DL) are susceptible to various adversarial attacks. In this work, we explore the threat posed by training data poisoning attacks, in which a malicious insider attempts to insert backdoors into a deep neural network (DNN) used as part of the CAD flow. Using a case study on lithographic hotspot detection, we explore how an adversary can contaminate training data with specially crafted, yet meaningful, genuinely labeled, and design-rule-compliant poisoned clips. Our experiments show that a very low poisoned-to-clean data ratio in the training set is sufficient to backdoor the DNN; an adversary can "hide" specific hotspot clips at inference time by including a backdoor trigger shape in the …
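The poisoning mechanism the abstract describes, stamping a trigger shape into a small fraction of training clips while leaving their (genuine) labels untouched, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the helper names (`stamp_trigger`, `poison_dataset`), the array-based clip representation, and the specific trigger placement are all assumptions made for the example.

```python
import numpy as np

def stamp_trigger(clip, trigger, row=0, col=0):
    """Overlay a small trigger shape onto a 2D layout clip (hypothetical encoding)."""
    out = clip.copy()
    h, w = trigger.shape
    out[row:row + h, col:col + w] = trigger
    return out

def poison_dataset(clips, labels, trigger, ratio=0.02, rng=None):
    """Stamp the trigger into a small fraction of non-hotspot clips.

    Labels are left unchanged ("genuinely labeled" in the paper's terms),
    so the poisoned samples pass casual label inspection. The DNN then
    learns to associate the trigger with the non-hotspot class, letting an
    adversary hide real hotspots at inference time by adding the trigger.
    """
    rng = rng or np.random.default_rng(0)
    clips = clips.copy()
    clean_idx = np.flatnonzero(labels == 0)  # indices of non-hotspot clips
    n_poison = max(1, int(ratio * len(clips)))  # low poison/clean ratio
    chosen = rng.choice(clean_idx, size=n_poison, replace=False)
    for i in chosen:
        clips[i] = stamp_trigger(clips[i], trigger)
    return clips, chosen
```

In a real attack the trigger would have to be a design-rule-compliant shape, which is the substantive engineering contribution the abstract alludes to; the sketch above only captures the bookkeeping.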
Keywords
Computer aided design, design for manufacture, machine learning (ML), robustness, security