Privacy-Preserving and Verifiable Federated Learning Framework for Edge Computing

IEEE Transactions on Information Forensics and Security (2023)

In federated learning (FL) for edge computing, each client collaboratively trains a global model through a cloud server (CS) without sharing its original dataset. However, the CS can analyze and forge the uploaded parameters and infer clients' private information, which makes it necessary both to verify the integrity of aggregation and to protect client privacy. Although some works ensure the verifiability of aggregation results, there is still a lack of analysis of the relationship between verification and the dropout rate in edge computing. In this work, we propose privacy-preserving and verifiable federated learning (PVFL), which incurs low communication and computation overhead for verification. We theoretically demonstrate that PVFL has three properties: 1) the communication overhead for verification is independent of both the dropouts and the dimension of the parameter vector; 2) the computation overhead for verification is independent of the dropouts; 3) the value of the loss function is negatively correlated with the number of dropouts. Experimental results confirm our theoretical findings and show practical performance under a high dropout rate, thereby facilitating the design of privacy-preserving and verifiable FL algorithms for edge computing with high-dimensional parameter vectors and high dropout rates.
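The setting the abstract describes — clients upload perturbed model updates and the server aggregates whatever subset survives — can be sketched minimally as follows. This is an illustrative toy, not the PVFL protocol itself: the noise scale `sigma`, the client count, and the dropout mask are invented for the example, and the verification step (the paper's core contribution) is omitted.

```python
import numpy as np

# Illustrative sketch of DP-noised federated averaging with client dropouts.
# All parameters (sigma, n_clients, dim, dropout pattern) are assumptions
# for the example; they are not taken from the PVFL paper.

rng = np.random.default_rng(seed=0)
n_clients, dim, sigma = 10, 4, 0.1

# Stand-in for each client's local model update after local training.
updates = rng.normal(size=(n_clients, dim))

# Each client perturbs its update with Gaussian noise before upload,
# so the server never sees the exact local parameters (differential privacy).
noisy_updates = updates + rng.normal(scale=sigma, size=(n_clients, dim))

# Suppose 3 of 10 clients drop out mid-round; the server can only
# average the uploads it actually received.
alive = np.array([True] * 7 + [False] * 3)
aggregate = noisy_updates[alive].mean(axis=0)

print(aggregate.shape)  # one averaged parameter vector of dimension `dim`
```

The point the abstract makes is about what this picture leaves out: naive schemes make the *verification* cost of such an aggregate grow with the number of dropouts and with `dim`, whereas PVFL's verification overhead is claimed to be independent of both.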
Federated learning, differential privacy, convergence performance, verification, edge computing