A Generic Obfuscation Framework for Preventing ML-Attacks on Strong-PUFs through Exploitation of DRAM-PUFs.

EuroS&P (2023)

Abstract
Considering the limited power and computational resources available, designing sufficiently secure systems for low-power devices is a difficult problem. With the ubiquitous adoption of the Internet of Things (IoT) showing no signs of slowing, resource-constrained security is more important than ever. Physical Unclonable Functions (PUFs) have gained momentum in recent years for their potential to enable strong security by generating unique identifiers from entropy derived from unique manufacturing variations. Strong-PUFs, which are desirable for authentication protocols, have often been shown to be vulnerable to Machine Learning Modelling Attacks (ML-MA). Recently, several schemes have been proposed to enhance security against ML-MA through post-processing of the PUF; however, these schemes often fail to uphold security sufficiently, require too large an additional overhead, or must insecurely store key data in Non-Volatile Memory. In this work, we propose a generic framework for securing Strong-PUFs against ML-MA by obfuscating challenge and response data, exploiting a DRAM-PUF to supplement a One-Way Function (OWF) that can be implemented using the available resources of an FPGA platform. Our proposed scheme enables reconfigurability, strong security and one-wayness. We conduct ML-MA using various classifiers to thoroughly evaluate the performance of our scheme across multiple 16-bit and 32-bit Arbiter-PUF (APUF) variants, showing that it reduces model accuracy to around 50% (equivalent to random guessing) for each PUF, and we evaluate the properties of the final responses, demonstrating that ideal uniformity and uniqueness are maintained. Although we demonstrate our proposal through a DRAM-PUF, the scheme can be extended to work with memory-based PUFs in general.
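The core idea of the abstract — hiding raw challenge/response pairs from an ML attacker by keying a one-way function with DRAM-PUF-derived bits — can be illustrated with a minimal sketch. This is not the paper's actual construction: the key constant, function names, and the use of SHA-256 as a stand-in for the FPGA-implemented OWF are all assumptions for illustration only.

```python
import hashlib

# Hypothetical 256-bit key read out from a DRAM-PUF; in hardware this
# would come from decay/startup behaviour of DRAM cells, not a constant.
DRAM_PUF_KEY = bytes.fromhex("9f" * 32)  # placeholder value

def obfuscate_challenge(challenge: bytes) -> bytes:
    """One-way transform of the raw challenge, keyed by the DRAM-PUF.

    SHA-256 stands in for the on-FPGA one-way function here; the
    attacker never observes the raw challenge applied to the APUF.
    """
    return hashlib.sha256(DRAM_PUF_KEY + challenge).digest()

def obfuscate_response(response_bit: int, challenge: bytes) -> int:
    """Mask the Strong-PUF response bit with challenge-dependent key
    material, so observed CRPs no longer reveal the APUF's behaviour."""
    mask = hashlib.sha256(challenge + DRAM_PUF_KEY).digest()[0] & 1
    return response_bit ^ mask
```

Because both directions of the interface are keyed by the (unclonable, device-local) DRAM-PUF value, an attacker collecting obfuscated pairs sees data that is decorrelated from the underlying APUF, which is consistent with model accuracy dropping to roughly 50%.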
Keywords
Physical Unclonable Functions, Strong PUF, DRAM-PUF, Machine-Learning Modelling Attack, PUF Obfuscation