GAIN: Decentralized Privacy-Preserving Federated Learning

J. Inf. Secur. Appl. (2023)

Abstract
Federated learning enables multiple participants to cooperatively train a model: each participant computes gradients on its own data, and a coordinator aggregates the gradients to orchestrate training. To preserve data privacy, gradients must be protected during training. Pairwise masking satisfies this requirement by allowing participants to blind their gradients with masks while the coordinator aggregates in the blinded domain. However, this approach leaks aggregated results to external adversaries (e.g., an adversarial coordinator), leaving it vulnerable to quantity inference attacks. Additionally, existing pairwise masking-based schemes rely on a central coordinator and thus suffer from a single point of failure. To address these issues, we propose a decentralized privacy-preserving federated learning scheme called GAIN. GAIN blinds gradients with masks and encrypts the blinded gradients using additively homomorphic encryption, which ensures the confidentiality of gradients and discloses nothing about aggregated results to external adversaries, thereby resisting quantity inference attacks. In GAIN, we design a derivation mechanism for mask generation, where masks are derived from shared keys established by a single key agreement. This mechanism reduces the computation and communication costs of existing schemes. Furthermore, GAIN introduces smart contracts over blockchains to aggregate gradients in a decentralized manner, which eliminates the single point of failure; the smart contracts also provide verifiability for model training. We present a security analysis to demonstrate the security of GAIN, and conduct comprehensive experiments to evaluate its performance.
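The pairwise-masking idea the abstract builds on can be sketched as follows. This is a minimal illustration under assumptions, not GAIN's actual protocol: here the pairwise shared keys are plain integers and `mask_from_key` is a stand-in for a proper PRG seeded by a key-agreement secret. Each pair of participants derives the same mask; the lower-indexed one adds it and the higher-indexed one subtracts it, so all masks cancel in the aggregate while each individual blinded gradient reveals nothing on its own.

```python
import numpy as np

def mask_from_key(key: int, dim: int) -> np.ndarray:
    # Derive a deterministic mask vector from a shared pairwise key.
    # (Illustrative only: a real scheme would use a cryptographic PRG.)
    return np.random.default_rng(key).integers(0, 2**16, size=dim).astype(np.int64)

def blind(i: int, gradients: list, keys: dict, dim: int) -> np.ndarray:
    # Blind participant i's gradient with one mask per other participant.
    g = gradients[i].copy()
    for j in range(len(gradients)):
        if j == i:
            continue
        m = mask_from_key(keys[min(i, j), max(i, j)], dim)
        g += m if i < j else -m  # +m for i<j, -m for i>j: masks cancel pairwise
    return g

# Three participants with integer-quantized gradients (hypothetical data)
dim = 4
rng = np.random.default_rng(0)
grads = [rng.integers(0, 100, size=dim).astype(np.int64) for _ in range(3)]
keys = {(i, j): 1000 + 10 * i + j for i in range(3) for j in range(i + 1, 3)}
blinded = [blind(i, grads, keys, dim) for i in range(3)]

# The aggregator sees only blinded gradients, yet their sum is the true sum.
assert np.array_equal(sum(blinded), sum(grads))
```

In GAIN, per the abstract, the pairwise shared keys come from a single key agreement rather than one agreement per round, and the blinded gradients are additionally encrypted under additively homomorphic encryption so that even the aggregated result stays hidden from external adversaries.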
Keywords
Privacy-preserving, Federated learning, Decentralization, Smart contract, Blockchain