DeepTheft: Stealing DNN Model Architectures through Power Side Channel

CoRR (2023)

Abstract
Deep Neural Network (DNN) models are often deployed in resource-sharing clouds as Machine Learning as a Service (MLaaS) to provide inference services. To steal model architectures, which are valuable intellectual property, a class of attacks has been proposed that exploits different side-channel leakages, posing a serious security challenge to MLaaS. Also targeting MLaaS, we propose a new end-to-end attack, DeepTheft, to accurately recover complex DNN model architectures on general processors via the RAPL-based power side channel. However, an attacker can acquire only a low sampling rate (1 kHz) of the time-series energy traces from the RAPL interface, rendering existing techniques ineffective in stealing large and deep DNN models. To this end, we design a novel and generic learning-based framework consisting of a set of meta-models, based on which DeepTheft is demonstrated to recover, with high accuracy, a large number (thousands) of model architectures from different model families, including the deepest, ResNet152. In particular, DeepTheft achieves a Levenshtein Distance Accuracy of 99.75% in recovering network structures and a weighted average F1 score of 99.60% in recovering diverse layer-wise hyperparameters. Moreover, our proposed learning framework generalizes to other time-series side-channel signals. To validate this generalization, we exploit another existing side channel, i.e., CPU frequency. Unlike RAPL, CPU frequency is accessible to unprivileged users in bare-metal OSes. Using our generic learning framework trained on CPU-frequency traces, DeepTheft shows similarly high attack performance in stealing model architectures.
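The abstract relies on two coarse-grained time-series signals exposed by Linux sysfs: the RAPL energy counter and the per-core CPU frequency. The following minimal sketch is not the paper's attack code; it only illustrates how such a trace can be polled at roughly the 1 kHz rate quoted above. The sysfs paths are standard Linux interfaces (RAPL energy reporting typically requires elevated privileges on recent kernels, while scaling_cur_freq is readable by unprivileged users); the function name and parameters are illustrative.

```python
# Minimal sketch (illustrative, not DeepTheft's implementation): poll a
# Linux sysfs counter to collect a coarse-grained time-series trace.
import time

# Package energy counter in microjoules; often root-only on recent kernels.
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"
# Current core frequency in kHz; readable by unprivileged users.
CPU_FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

def sample_trace(path: str, duration_s: float = 1.0, rate_hz: int = 1000) -> list[int]:
    """Poll the sysfs attribute at roughly `rate_hz` and return raw samples."""
    period = 1.0 / rate_hz
    samples = []
    end = time.perf_counter() + duration_s
    with open(path) as f:
        while time.perf_counter() < end:
            f.seek(0)                         # sysfs attributes are re-read after seeking to 0
            samples.append(int(f.read().strip()))
            time.sleep(period)
    return samples

if __name__ == "__main__":
    trace = sample_trace(CPU_FREQ, duration_s=0.5)
    print(f"collected {len(trace)} samples, e.g. {trace[:5]}")
```

In the attack setting described by the abstract, such traces (energy or frequency) recorded while the victim model runs inference would be fed to the learning-based framework to infer the network structure and layer-wise hyperparameters.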
Keywords
DNN model architectures, power side channel