Sphinx: Enabling Privacy-Preserving Online Learning over the Cloud

IEEE Symposium on Security and Privacy (S&P), 2022

Cited by 18 | Views: 50
Abstract
With the growing complexity of deep learning applications, users have started to delegate their data and models to the cloud. Among these applications, online learning services, which involve both training and inference procedures, are widely deployed. To provide privacy guarantees on the public cloud, researchers have proposed a plethora of privacy-preserving deep learning algorithms based on different techniques, ranging from obfuscation mechanisms to cryptographic tools. However, none of them is applicable to online learning services: they either focus only on the inference or the training procedure while ignoring the other, or they require non-colluding or trusted third parties. In this paper, we present Sphinx, an efficient and privacy-preserving online deep learning system that requires no trusted third party. Sphinx strikes a balance between model performance, computational efficiency, and privacy preservation through systematic optimizations of both the private inference and training protocols. At its core, Sphinx synthesizes homomorphic encryption and differential privacy reciprocally, maintaining the model by keeping most of its parameters as plaintexts and thereby enabling fast training and inference protocol designs. Meanwhile, by refining the behavior of homomorphic operations, Sphinx avoids most heavyweight homomorphic operations and minimizes communication cost. As a result, Sphinx reduces training time significantly while achieving real-time inference without exposing user privacy. In our experiments, we find that compared to a pure homomorphic encryption solution, Sphinx is $35 \times$ faster for training and four orders of magnitude faster for inference, providing real-time inference responses (0.05 seconds for MNIST and 0.08 seconds for CIFAR-10). Our experiments also demonstrate that Sphinx achieves promising model accuracy under a tight privacy budget (96% accuracy at $\epsilon=2, \delta=10^{-5}$ for MNIST) without a trusted data aggregator, and is more robust against practical reconstruction attacks.
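The abstract's central design point is that most model parameters stay as plaintexts protected by differential privacy, with homomorphic encryption reserved for the remaining encrypted portion. As a rough illustration of the differential-privacy side only, the sketch below shows a standard DP-SGD-style update, clipping per-example gradients and adding Gaussian noise before updating the plaintext parameters. This is not Sphinx's actual protocol; the function and parameter names (dp_sgd_step, clip_norm, noise_multiplier) are illustrative assumptions.

# Minimal DP-SGD-style sketch (hypothetical; not Sphinx's actual training protocol).
# Per-example gradients are clipped to an L2 bound and Gaussian noise is added
# before the averaged gradient updates the plaintext parameters.
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.05):
    """One differentially private update over a batch of per-example gradients."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # L2 clipping
    grad_sum = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (sigma = noise_multiplier * clip_norm)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=grad_sum.shape)
    noisy_mean = (grad_sum + noise) / len(per_example_grads)
    return params - lr * noisy_mean

if __name__ == "__main__":
    # Toy usage: a linear model's plaintext weight vector updated privately.
    rng = np.random.default_rng(0)
    w = np.zeros(4)
    batch_grads = [rng.normal(size=4) for _ in range(32)]
    w = dp_sgd_step(w, batch_grads)
    print(w)

In a full system, a privacy accountant would track the cumulative (epsilon, delta) budget across training steps, such as the (2, 10^-5) reported for MNIST in the abstract.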
Keywords
user privacy, real-time inference response, online learning services, public cloud, privacy preservation, private inference, training protocols, Sphinx, differential privacy, homomorphic encryption, privacy-preserving online deep learning system, MNIST dataset, CIFAR-10 dataset, cryptographic tools