Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase

arXiv: Cryptography and Security (2018)

Abstract
We present a practical method for protecting data during the inference phase of deep learning, based on bipartite topology threat modeling and an interactive adversarial deep network construction. We term this approach "Privacy Partitioning". In the proposed framework, we split the machine learning model, deploying a few layers on users' local devices and the remaining layers on a remote server. We propose an approach that protects a user's data during the inference phase while still achieving good classification accuracy. We conduct an experimental evaluation of this approach on benchmark datasets for three computer vision tasks. The experimental results indicate that this approach can significantly attenuate the capacity of an adversary with access to a state-of-the-art deep network's intermediate states to learn privacy-sensitive inputs to the network. For example, we demonstrate that our approach can prevent attackers from inferring private attributes such as gender from a face image dataset, without sacrificing the classification accuracy of the original machine learning task, such as face identification.
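
As a rough illustration of the partitioning and adversarial objective described in the abstract, the following is a minimal PyTorch sketch: a local on-device sub-network, a remote server sub-network, and an adversary that tries to recover a private attribute (e.g., gender) from the intermediate activation that crosses the device/server boundary. The layer sizes, partition point, label counts, and trade-off weight lam are illustrative assumptions, not the paper's exact construction.

import torch
import torch.nn as nn

# Layers deployed on the user's local device.
local_part = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8),
)

# Remaining layers deployed on the remote server
# (main task, e.g. face identification; 1000 identities is illustrative).
remote_part = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 256), nn.ReLU(),
    nn.Linear(256, 1000),
)

# Adversary that tries to recover a binary private attribute
# (e.g. gender) from the intermediate activation.
adversary = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 2),
)

ce = nn.CrossEntropyLoss()
lam = 1.0  # utility/privacy trade-off weight, illustrative

def training_losses(x, task_label, private_label):
    z = local_part(x)  # intermediate state sent to the server
    task_loss = ce(remote_part(z), task_label)
    # The adversary trains on a detached copy, so its updates do not
    # reach the local layers.
    adv_loss = ce(adversary(z.detach()), private_label)
    # The local/remote parts minimize the task loss while *maximizing*
    # the adversary's loss on the shared activation.
    model_loss = task_loss - lam * ce(adversary(z), private_label)
    return model_loss, adv_loss

# Example: a dummy batch of 4 RGB images, 64x64.
x = torch.randn(4, 3, 64, 64)
task_y = torch.randint(0, 1000, (4,))
priv_y = torch.randint(0, 2, (4,))
model_loss, adv_loss = training_losses(x, task_y, priv_y)

In a full training loop, the two losses would be minimized in alternation by separate optimizers (one over the local and remote parameters, one over the adversary), zeroing gradients between steps; at deployment time, only z = local_part(x) leaves the device, and the server never sees the raw input.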
Keywords
privacy, deep learning inference phase, user data, deep learning