SecureTVM: A TVM-based Compiler Framework for Selective Privacy-preserving Neural Inference

ACM Transactions on Design Automation of Electronic Systems (2023)

Abstract
Privacy-preserving neural inference protects both the user input data and the model weights from being leaked to other parties during the inference of a deep learning model. To achieve this protection, the inference is typically performed within a secure domain, and only the final result is revealed in plaintext. Performing the computations in the secure domain, however, incurs roughly a thousandfold overhead compared with the insecure version, especially when the operations of the entire model are mapped to the secure domain, which is the computation scheme adopted by existing works. This work is inspired by the transfer learning technique, in which the weights of some model layers are transferred from a publicly available, pre-built deep learning model; this opens the door to further boosting execution efficiency by performing the secure computations selectively on parts of the transferred model. We have built a compiler framework, SecureTVM, that automatically translates a trained model into its secure version, where the model layers to be protected can be selectively configured by the model provider. As a result, SecureTVM outperforms the state of the art, CrypTFlow2, by a factor of 55 for the transfer learning model. We believe that this work takes a step toward the practical use of privacy-preserving neural inference in real-world applications.
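To make the selective scheme concrete, the sketch below illustrates the underlying idea in plain Python: a provider-supplied configuration marks which layers must be evaluated in the secure domain, while the transferred, publicly known layers remain in plaintext. The `Layer`, `secure` flag, and `run_inference` names are hypothetical and not the SecureTVM API; this is only a minimal model of the partitioning concept described in the abstract.

```python
# Minimal sketch of selective secure inference (NOT the SecureTVM API).
# A provider-configured flag decides which layers would be lowered to
# secure-domain kernels and which stay in plaintext.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Layer:
    name: str
    fn: Callable[[float], float]  # stand-in for the layer's computation
    secure: bool = False          # provider-configured: protect this layer?


def run_inference(layers: List[Layer], x: float) -> float:
    for layer in layers:
        if layer.secure:
            # In a real system this call would be evaluated inside the
            # secure domain (e.g., via an MPC backend); here it is only
            # tagged for illustration.
            x = layer.fn(x)
        else:
            # Transferred, publicly available layers run in plaintext,
            # avoiding the secure-domain overhead.
            x = layer.fn(x)
    return x


# Hypothetical example: protect only the fine-tuned classifier head and
# leave the transferred feature-extraction layers unprotected.
model = [
    Layer("conv_block_1", lambda v: v * 2.0, secure=False),
    Layer("conv_block_2", lambda v: v + 1.0, secure=False),
    Layer("classifier",   lambda v: v * 0.5, secure=True),
]
print(run_inference(model, 3.0))
```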
Keywords
compiler framework, TVM-based, privacy-preserving