Renyi Differentially Private ADMM Based L1 Regularized Classification

arXiv (2019)

Abstract
In this paper we present two new algorithms for solving L1-regularized classification problems under Renyi differential privacy. Both algorithms are ADMM-based, so the empirical risk minimization and L1 regularization steps are separated into two optimization problems at each iteration. We adopt the stochastic ADMM approach and use the recent Renyi differential privacy (RDP) technique to privatize the training data. The first algorithm achieves differential privacy by gradient perturbation, with privacy amplified by subsampling; the second achieves differential privacy by model perturbation, calculating the sensitivity and perturbing the model after each training epoch. We compare the performance of our algorithms with several baseline algorithms on both real and simulated datasets. The experimental results show that, under a high level of privacy protection, the first algorithm performs well in classification, and the second performs well in feature selection when the data contain many irrelevant attributes.
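To illustrate the ADMM split the abstract describes, here is a minimal sketch of the gradient-perturbation variant for L1-regularized logistic regression: the ERM step takes a noisy clipped-gradient update on a subsampled minibatch, while the L1 step is an exact soft-thresholding proximal update that touches no data. This is not the authors' exact algorithm; all parameter names (`sigma`, `clip`, `batch_frac`, etc.) and the noise calibration are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1 -- the L1 step of the ADMM split.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def dp_admm_l1_logistic(X, y, lam=0.1, rho=1.0, sigma=1.0, clip=1.0,
                        batch_frac=0.1, epochs=20, lr=0.1, seed=None):
    """Sketch of stochastic ADMM for L1-regularized logistic regression with
    Gaussian gradient perturbation on subsampled minibatches (hypothetical
    instantiation; noise scale is illustrative, not an RDP-calibrated value).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)   # ERM variable
    z = np.zeros(d)   # L1 variable
    u = np.zeros(d)   # scaled dual variable
    m = max(1, int(batch_frac * n))
    for _ in range(epochs):
        # w-update: noisy gradient step on the logistic loss plus the
        # augmented Lagrangian term; subsampling amplifies privacy.
        idx = rng.choice(n, size=m, replace=False)
        Xb, yb = X[idx], y[idx]
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        g = Xb.T @ (p - yb) / m
        g = g / max(1.0, np.linalg.norm(g) / clip)        # clip gradient norm
        g = g + rng.normal(0.0, sigma * clip / m, size=d)  # Gaussian perturbation
        w = w - lr * (g + rho * (w - z + u))
        # z-update: exact soft-thresholding (no training data involved).
        z = soft_threshold(w + u, lam / rho)
        # Dual update.
        u = u + w - z
    return z
```

Because the z-update is data-free, only the w-update consumes privacy budget, which is the structural advantage of separating the two subproblems via ADMM.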