Accuracy-Tweakable Federated Learning with Minimal Interruption

2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP(2023)

Abstract
Federated machine learning plays a significant role in pivotal industries such as healthcare, finance, and the Internet of Things. Because clients need not share their training data, it is appealing for privacy preservation, which is crucial in these sectors. However, federated learning in its basic form does not offer adequate security guarantees, which has led to a diverse line of research aimed at counteracting the associated weaknesses. Existing work is inefficient when dealing with malicious clients, and it accumulates varying levels of error from noisy intermediate models. In this paper, we present a federated learning protocol that combines a novel implementation of identifiable secret sharing with learning with errors. The proposed protocol can effectively identify malicious clients, so that only honest clients' parameters are incorporated, with minimal interruption to the training sequence. Furthermore, our protocol offers an accuracy tolerance mechanism that can be tuned to suit the application: residual noise above the tolerance is prevented from degrading the intermediate model's accuracy, which in turn ensures that the final validation accuracy remains above the desired level. We also show that our approach is lightweight on the client side, as we focus on efficient federated learning on smaller IoT devices.
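The secret sharing and Lagrange interpolation mentioned above can be illustrated with a minimal sketch of standard Shamir secret sharing over a finite field. This is not the paper's identifiable variant (which additionally detects shares submitted by malicious clients); the prime modulus, function names, and parameters below are illustrative assumptions only.

```python
import random

# Small illustrative prime modulus for the finite field (an assumption,
# not a parameter from the paper).
PRIME = 2**13 - 1

def make_shares(secret, threshold, n_shares):
    """Split `secret` into n_shares points on a random
    degree-(threshold-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Recover the secret, i.e. the polynomial evaluated at x = 0,
    via Lagrange interpolation modulo PRIME."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any `threshold` of the `n_shares` points suffices to reconstruct the secret, while fewer reveal nothing; in the federated setting, a model parameter would play the role of the secret and each client would hold one share.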
Keywords
Federated Learning, Minimal Disruption, Training Data, Internet of Things Devices, Secret Sharing, Accuracy Tolerance, Corruption, System of Equations, Level of Accuracy, Global Model, Stochastic Gradient Descent, Privacy Issues, Weight Vector, Noise Variance, Collusion, Field Size, Finite Field, Lagrange Interpolation, Local Training, Training Round, Differential Privacy, Multi-party Computation, Security Parameter, Multiple Clients, Univariate Polynomial, Protocol Execution, Types of Attacks, Cross-entropy Loss