Preserving Differential Privacy Between Features in Distributed Estimation.

STAT (2018)

Abstract
Privacy is crucial in many applications of machine learning. Legal, ethical and societal issues restrict the sharing of sensitive data, making it difficult to learn from data sets that are partitioned between many parties. One important instance of such a distributed setting arises when information about each record in the data set is held by different data owners (the design matrix is "vertically partitioned"). In this setting, few approaches exist for private data sharing for the purpose of statistical estimation, and the classical set-up of differential privacy with a "trusted curator" preparing the data does not apply. We work with the notion of (ε, δ)-distributed differential privacy, which extends single-party differential privacy to the distributed, vertically partitioned case. We propose PRIDE, a scalable framework for distributed estimation in which each party communicates perturbed random projections of its locally held features, ensuring that (ε, δ)-distributed differential privacy is preserved. For ℓ2-penalized supervised learning problems, PRIDE has bounded estimation error compared with the optimal estimates obtained without privacy constraints in the non-distributed setting. We confirm this empirically on real-world and synthetic data sets. (c) 2018 John Wiley & Sons, Ltd.
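The communication step the abstract describes, a party releasing a noisy random projection of its local feature block, can be sketched roughly as below. This is an illustrative assumption, not the paper's PRIDE algorithm: the function name, the row-norm clipping, and the Gaussian-mechanism noise scale (which assumes the clipped L2 norm bounds the sensitivity of the released projection) are all simplifications for exposition.

```python
import numpy as np

def perturbed_projection(X, k, epsilon, delta, clip=1.0, rng=None):
    """Hypothetical sketch of one party's release in a vertically
    partitioned setting: project the locally held feature block X
    (n records x d local features) to k dimensions with a random
    Gaussian matrix, then add Gaussian noise for (epsilon, delta)-DP.
    The sensitivity bound used here is a simplifying assumption."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Clip each record's row norm so one record's influence is bounded by `clip`.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X / np.maximum(norms / clip, 1.0)
    # Random Gaussian projection to k dimensions.
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    P = X_clipped @ R
    # Gaussian mechanism noise scale (standard formula, assuming L2
    # sensitivity `clip` for the projected record).
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return P + rng.normal(scale=sigma, size=P.shape)
```

In a full protocol, each data owner would release such a perturbed projection of its own feature block, and the estimation (e.g. an ℓ2-penalized regression) would be carried out on the combined released matrices rather than on the raw features.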
Keywords
high-dimensional data,large and complex data sets,machine learning,statistical learning