A Verifiable and Privacy-Preserving Federated Learning Training Framework

Haohua Duan, Zedong Peng, Liyao Xiang, Yuncong Hu, Bo Li

IEEE Transactions on Dependable and Secure Computing (2024)

Abstract
Federated learning allows multiple clients to collaboratively train a global model without revealing their private data. Despite its success in many applications, it remains a challenge to prevent malicious clients from corrupting the global model by uploading incorrect model updates. Hence, a critical issue arises: how to validate that training is truly conducted on legitimate neural networks. To address this issue, we propose VPNNT, a zero-knowledge proof scheme for neural network backpropagation. VPNNT enables each client to prove to others that its model updates (gradients) are indeed calculated on the global model of the previous round, without leaking any information about the client's private training data. Our proof scheme is generally applicable to any type of neural network. Unlike conventional verification schemes that construct neural network operations from gate-level circuits, we improve verification efficiency by formulating the training process with custom gates (matrix operations) and applying an optimized linear-time zero-knowledge protocol for verification. Thanks to the recursive structure of neural network backpropagation, common custom gates are combined during verification, thereby reducing prover and verifier costs compared to conventional zero-knowledge proofs. Experimental results show that VPNNT is a lightweight verification scheme for neural network backpropagation with improved proving time, verification time, and proof size.
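The efficiency argument rests on backpropagation being a short recursion over a single repeated matrix-product pattern, so one custom gate can be reused per layer instead of unrolling everything into gate-level circuits. The following minimal sketch (illustrative only; the network shapes and names are assumptions, and it implements plain backprop, not the paper's proof protocol) shows how every forward and backward step reduces to matrix operations:

```python
import numpy as np

# Illustrative sketch: backprop on a tiny 2-layer MLP, written so that every
# step is a matrix operation. Shapes and variable names are assumptions for
# this example, not taken from the paper; VPNNT's custom gates would wrap
# such matrix products, and the same gate pattern recurs at every layer.

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))    # batch of 4 inputs, 3 features
W1 = rng.standard_normal((3, 5))
W2 = rng.standard_normal((5, 2))
y = rng.standard_normal((4, 2))

# Forward pass: a chain of matrix products and elementwise nonlinearities.
Z1 = X @ W1
A1 = np.maximum(Z1, 0.0)           # ReLU
Z2 = A1 @ W2
loss = 0.5 * np.mean((Z2 - y) ** 2)

# Backward pass: the recursion delta_l = (delta_{l+1} @ W_{l+1}^T) * f'(Z_l)
# applies the same matrix-product "gate" at every layer, which is what allows
# a verifier to reuse one custom gate rather than fresh gate-level circuits.
dZ2 = (Z2 - y) / y.size
dW2 = A1.T @ dZ2                   # gradient uploaded as a model update
dA1 = dZ2 @ W2.T
dZ1 = dA1 * (Z1 > 0)
dW1 = X.T @ dZ1

print(dW1.shape, dW2.shape)        # gradients match the weight shapes
```

The point of the sketch is structural: each layer contributes the same two or three matrix-operation gates, so proof cost grows with the number of layers rather than with a full gate-level unrolling of the arithmetic.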
Keywords
Zero-knowledge proofs, privacy preserving, neural networks