FENet: Privacy-preserving Neural Network Training with Functional Encryption

Proceedings of the 9th ACM International Workshop on Security and Privacy Analytics, IWSPA 2023 (2023)

Abstract
Privacy-preserving machine learning (PPML) has been gaining a lot of attention in recent years, and several techniques have been proposed to achieve it. Cryptography-based PPML approaches such as Fully Homomorphic Encryption (FHE) and Secure Multiparty Computation (SMC) have been extensively investigated. However, Functional Encryption (FE), a newer paradigm, has received far less study, and FE-based PPML approaches are still in their early stages. Most existing FE-based PPML approaches focus on privacy-preserving inference, while the work that does address FE-based privacy-preserving training suffers from very high training times. To alleviate this issue, this paper presents a privacy-preserving neural network framework using FE that supports both training and inference on encrypted data. Our proposed approach is twofold. First, we use the Inner-product Functional Encryption (IPFE) and Function-hiding Inner Product Encryption (FHIPE) schemes to develop secure activation functions. To the best of our knowledge, this is the first work to demonstrate the application of FHIPE in PPML. Second, we develop a PPML framework called FENet that uses these secure activation functions to perform secure forward propagation and backpropagation. Our experimental results show that our framework can successfully train a neural network on the encrypted MNIST dataset with an overall accuracy of 95%. Our work outperforms the state of the art in this area, reducing training time by 28x (for IPFE) and 2x (for FHIPE) while improving security.
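To make the IPFE workflow behind the secure activation functions concrete, below is a minimal, hypothetical sketch (not the paper's FENet code): a toy DDH-based inner-product functional encryption scheme in the style of Abdalla et al. (2015), used to evaluate a single neuron's pre-activation ⟨x, w⟩ + b on an encrypted input before applying the activation in the clear. All names (setup, encrypt, keygen, decrypt) and parameters are illustrative assumptions, and the group is deliberately tiny and insecure.

```python
# Hypothetical sketch, NOT the paper's FENet implementation: a toy
# DDH-based inner-product functional encryption (IPFE) scheme in the
# style of Abdalla et al. (2015). It shows how an evaluator holding a
# functional key for a weight vector w can learn only <x, w> from an
# encrypted input x, after which the activation is applied in the clear.
import random

P = 2_147_483_647   # Mersenne prime 2^31 - 1 (toy group, NOT secure)
G = 7               # a primitive root modulo P
ORDER = P - 1       # order of the multiplicative group mod P

def setup(n):
    """Generate a master secret key and master public key for n-dim inputs."""
    msk = [random.randrange(ORDER) for _ in range(n)]
    mpk = [pow(G, s, P) for s in msk]
    return msk, mpk

def encrypt(mpk, x):
    """Encrypt the integer vector x component-wise under shared randomness r."""
    r = random.randrange(ORDER)
    ct0 = pow(G, r, P)
    cts = [(pow(h, r, P) * pow(G, xi, P)) % P for h, xi in zip(mpk, x)]
    return ct0, cts

def keygen(msk, w):
    """Derive a functional key that reveals only the inner product <x, w>."""
    return sum(s * wi for s, wi in zip(msk, w)) % ORDER

def decrypt(ct, sk_w, w, bound=10_000):
    """Recover <x, w> from the ciphertext by solving a small discrete log."""
    ct0, cts = ct
    # prod cts[i]^w[i] / ct0^sk_w  =  G^{<x, w>}  (mod P)
    num = 1
    for c, wi in zip(cts, w):
        num = (num * pow(c, wi % ORDER, P)) % P
    denom_inv = pow(pow(ct0, sk_w, P), P - 2, P)   # modular inverse via Fermat
    target = (num * denom_inv) % P
    for v in range(-bound, bound + 1):             # brute-force small DLog
        if pow(G, v % ORDER, P) == target:
            return v
    raise ValueError("inner product outside the search bound")

def relu(z):
    return max(0, z)

if __name__ == "__main__":
    x = [3, 1, 4, 1, 5]   # quantized input (e.g., pixel intensities)
    w = [2, -1, 0, 3, 1]  # one neuron's weight row
    b = -4
    msk, mpk = setup(len(x))
    ct = encrypt(mpk, x)          # data owner encrypts its input
    sk_w = keygen(msk, w)         # authority issues a key for this neuron
    z = decrypt(ct, sk_w, w) + b  # evaluator learns only <x, w>, adds bias
    print("pre-activation:", z, "-> activation:", relu(z))
```

In this sketch the evaluator learns only the inner product authorized by the functional key, not x itself; a function-hiding variant (FHIPE), as used in the paper, would additionally hide the weight vector embedded in the key.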
Keywords
Privacy-preserving Machine Learning, Functional Encryption, Secure Computation, Secure Activation Function