FedShare: Secure Aggregation based on Additive Secret Sharing in Federated Learning.

IDEAS (2023)

Abstract
Federated learning is a machine learning technique in which multiple clients with local data collaborate to train a shared model. In FedAvg, the main federated learning algorithm, clients train models locally and send the trained models to a server. Although the sensitive data itself never leaves the clients, a malicious server that has access to the clients' models in each training round can reconstruct the original training data. Secure aggregation techniques based on cryptography, trusted execution environments, or differential privacy are used to address this problem, but they incur computation and communication overhead or reduce the model's accuracy. In this paper, we consider a secure multi-party computation setup in which clients use additive secret sharing to send their models to multiple servers. Our solution provides secure aggregation as long as there are at least two non-colluding servers. Moreover, we provide a mathematical proof that the securely aggregated model at the end of each training round is exactly equal to the one produced by FedAvg, so accuracy is unaffected while communication and computation remain efficient. Experimental results show that our approach is 557% faster than SCOTCH, the state-of-the-art secure aggregation solution, and reduces the clients' communication cost by 25%. Additionally, the accuracy of the trained model exactly matches FedAvg's under balanced, unbalanced, IID, and non-IID data distributions, while our approach is only 8% slower than FedAvg.
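The core idea the abstract describes, splitting each client's model update into random additive shares so that no single server ever sees the real parameters while the combined server aggregates still recover the plain FedAvg result, can be illustrated with a small sketch. The Python code below is a hedged illustration under assumed details, not the authors' FedShare implementation: the helper names (make_shares, secure_aggregate), the Gaussian masking, and the single-process simulation of clients and servers are all hypothetical.

```python
import numpy as np

# Illustrative sketch (not the FedShare implementation): additive secret
# sharing of client model updates across multiple non-colluding servers.

def make_shares(update, n_servers, rng):
    """Split a flat parameter vector into n_servers additive shares.

    Each share alone is statistically random noise; only the sum of all
    shares equals the original update.
    """
    shares = [rng.normal(size=update.shape) for _ in range(n_servers - 1)]
    shares.append(update - sum(shares))  # last share makes the sum exact
    return shares

def secure_aggregate(client_updates, n_servers, seed=0):
    """Simulate secure aggregation: servers sum shares, then combine."""
    rng = np.random.default_rng(seed)
    # server_buckets[s] holds the s-th share from every client
    server_buckets = [[] for _ in range(n_servers)]
    for update in client_updates:
        for s, share in enumerate(make_shares(update, n_servers, rng)):
            server_buckets[s].append(share)
    # each server only sees random-looking shares and sums them locally
    partial_sums = [np.sum(bucket, axis=0) for bucket in server_buckets]
    # combining the servers' partial sums recovers the plain average
    return np.sum(partial_sums, axis=0) / len(client_updates)

# Quick check: the securely aggregated model equals plain FedAvg averaging.
updates = [np.random.default_rng(i).normal(size=5) for i in range(3)]
assert np.allclose(secure_aggregate(updates, n_servers=2),
                   np.mean(updates, axis=0))
```

The assert mirrors the paper's claim that the securely aggregated model is exactly the FedAvg aggregate: the random masks cancel when the servers' partial sums are combined, so privacy is gained without changing the averaged result.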