Defeating Strong PUF Modeling Attack via Adverse Selection of Challenge-Response Pairs

2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)(2018)

Abstract
Most advances in increasing the security of PUFs come from alterations to the topology of the mechanism or to the measurement used to generate its output. This paper focuses on a different way of improving security: selecting a subset of challenge-response pairs (CRPs) that conceals the patterns inherent in a PUF well enough to prevent an attacker from successfully replicating the PUF's responses. Our results show that it is possible to select a large set of CRPs that can be exposed to an attacker while limiting the attacker's modeling accuracy to as low as 74%, whereas without our selection process the accuracy reaches 93%.
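The abstract does not specify the selection criterion, but the general idea (expose only CRPs that hide the PUF's learnable structure) can be sketched with a toy arbiter-PUF model. Everything below is an assumption for illustration, not the paper's method: the linear delay model, the logistic-regression proxy attacker, and the heuristic of exposing only the CRPs on which a proxy model is least confident are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # challenge length in bits

# Toy arbiter-PUF: response = sign(w . phi(c)), where phi is the
# standard parity-feature transform of the challenge (assumed model).
w_puf = rng.normal(size=n + 1)

def phi(c):
    # Map challenge bits {0,1} to parity features in {-1,+1}:
    # phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias term.
    x = 1 - 2 * c
    f = np.cumprod(x[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([f, np.ones((len(c), 1))])

def respond(c):
    return (phi(c) @ w_puf > 0).astype(int)

def train_attacker(X, y, epochs=200, lr=0.1):
    # Plain logistic regression by gradient descent (proxy attacker).
    wm = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-np.clip(X @ wm, -30, 30)))
        wm += lr * X.T @ (y - p) / len(y)
    return wm

# Candidate CRP pool.
C = rng.integers(0, 2, size=(20000, n))
r = respond(C)
X = phi(C)

# Defender trains a proxy model on half the pool, then exposes only
# the CRPs the proxy is least confident about (hypothetical heuristic).
w_proxy = train_attacker(X[:10000], r[:10000])
margin = np.abs(X[10000:] @ w_proxy)
exposed = 10000 + np.argsort(margin)[:5000]

# Attacker trains on the exposed subset; accuracy is measured on
# fresh, independently drawn challenges.
w_atk = train_attacker(X[exposed], r[exposed])
C_test = rng.integers(0, 2, size=(5000, n))
acc = ((phi(C_test) @ w_atk > 0).astype(int) == respond(C_test)).mean()
print(f"attacker accuracy on fresh challenges: {acc:.3f}")
```

The sketch only demonstrates the pipeline shape (proxy model, CRP scoring, restricted exposure, attacker evaluation); how strongly a given selection rule suppresses modeling accuracy depends on the PUF model and attacker, and the paper's reported 74% vs. 93% figures come from its own selection process, not this heuristic.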
Keywords
Physically unclonable functions, modeling attacks, CRP selection, threat mitigation, adversarial machine learning