SEALing Neural Network Models in Encrypted Deep Learning Accelerators

2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC)(2021)

Abstract
Deep learning (DL) accelerators face a new security problem: vulnerability to physical-access-based attacks. An adversary can obtain an entire neural network (NN) model by physically snooping the memory bus that connects the accelerator chip to DRAM. Memory encryption therefore becomes important for improving the security of DL accelerators. Nevertheless, we observe that traditional memory encryption techniques, which work efficiently in CPU systems, cause significant performance degradation when applied directly to DL accelerators, due to the large bandwidth gap between the memory bus and the encryption engine. To address this problem, this paper proposes SEAL, a Secure and Efficient Accelerator scheme for deep Learning, which enhances the performance of encrypted DL accelerators by improving data access bandwidth. Specifically, SEAL leverages a criticality-aware smart encryption scheme that identifies the portion of data whose exposure has no impact on the security of NN models and allows it to bypass the encryption engine, thus reducing the amount of data to be encrypted without weakening security. Extensive experimental results demonstrate that, compared with existing memory encryption techniques, SEAL achieves a 1.34-14x overall performance improvement.
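The idea of routing only security-critical data through the encryption engine can be sketched as follows. This is a minimal illustration, not the paper's actual design: the criticality test, the toy counter-mode cipher, and all function names here are hypothetical placeholders standing in for SEAL's real criticality analysis and hardware encryption engine.

```python
# Hypothetical sketch of criticality-aware selective encryption.
# The cipher and criticality predicate are placeholders for
# illustration only, not SEAL's actual mechanisms.
import hashlib


def keystream(key: bytes, counter: int, length: int) -> bytes:
    # Toy counter-mode keystream derived from SHA-256 (stand-in
    # for a hardware AES counter-mode engine).
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + counter.to_bytes(8, "big") + block.to_bytes(8, "big")
        ).digest()
        block += 1
    return out[:length]


def is_critical(block: bytes) -> bool:
    # Placeholder predicate: treat any block with a nonzero byte as
    # critical. SEAL's real analysis instead identifies data whose
    # exposure does not leak the NN model.
    return any(block)


def write_back(blocks, key):
    # Encrypt only critical blocks; non-critical blocks bypass the
    # encryption engine, reducing pressure on its limited bandwidth.
    # Returns (encrypted_flag, data) per block.
    out = []
    for ctr, blk in enumerate(blocks):
        if is_critical(blk):
            ks = keystream(key, ctr, len(blk))
            out.append((True, bytes(a ^ b for a, b in zip(blk, ks))))
        else:
            out.append((False, blk))
    return out
```

Because XOR with the same keystream is its own inverse, reads reverse the transformation only for blocks flagged as encrypted; the bypassed blocks incur no encryption latency at all, which is the source of the bandwidth savings the abstract describes.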
Key words
neural network, memory bus, accelerator chip, DRAM memory, memory encryption, SEAL, DL accelerators, data access bandwidth, criticality-aware smart encryption scheme, deep learning accelerators, secure and efficient accelerator scheme for deep learning, physical access based attacks