Radial Basis Function Network with Differential Privacy

Future Generation Computer Systems (2022)

Abstract
Differential privacy (DP) remains a potent solution to what is arguably the defining issue in machine learning: balancing user privacy with an ever-increasing need for data. Practitioners must respect privacy, especially in sensitive healthcare domains. DP works towards this aim by adding noise to training data to obscure its origin and nature, and suits multiple Neural Network (NN) types: deep varieties that utilise multiple hidden layers, and shallow ones with a single hidden layer such as the Radial Basis Function Network (RBFN). The work herein explores DP in this context by devising a model that leverages Gaussian RBF parameters to add privacy during training. Our model's efficacy is examined against two real and three synthetic datasets, with results showing reasonable trade-offs between accuracy and privacy. With high intra-class variation, we retained 100% accuracy for two synthetic datasets and saw a drop of only 1.72% for another. When privacy is prioritised under low intra-class variation, we achieved accuracy drops of 8%–23% with an inherited epsilon that never exceeds one, indicating a strong privacy guarantee. We also show that timely training is achievable on a high-dimensional dataset consisting of 2M records and 170 features.
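To illustrate the general idea of combining a Gaussian RBF layer with Gaussian-mechanism noise, the sketch below perturbs the learned RBF centres before fitting the output layer. This is a minimal illustration, not the paper's actual method: the toy data, the choice of epsilon, delta, the sensitivity bound, and the centre-selection heuristic are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (hypothetical stand-in for the paper's datasets).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# RBF layer: centres sampled from training points (a common heuristic).
n_centres = 10
centres = X[rng.choice(len(X), n_centres, replace=False)]
sigma = 1.0

def rbf_features(X, centres, sigma):
    # Gaussian radial basis activations: exp(-||x - c||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Gaussian mechanism applied to the RBF centres. The epsilon/delta values
# and the L2 sensitivity bound are illustrative, not the paper's calibration.
epsilon, delta, sensitivity = 1.0, 1e-5, 0.1
noise_scale = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
private_centres = centres + rng.normal(scale=noise_scale, size=centres.shape)

# Output layer: ridge-regularised least squares on the noisy features.
Phi = rbf_features(X, private_centres, sigma)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(n_centres), Phi.T @ y)

preds = (Phi @ w > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"train accuracy with noisy centres: {accuracy:.2f}")
```

Because the noise lands on the RBF parameters rather than on gradients, training cost is a single least-squares solve, which is consistent with the abstract's claim that training remains timely on large datasets.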
Keywords
Machine Learning, Differential Privacy, Radial Basis Function Network, RBFN, Distributed Training, Ensemble Learning