Study of Contribution Verifiability for Privacy-preserving Federated Learning

2021 International Conference on Security and Information Technologies with AI, Internet Computing and Big-data Applications (2022)

Abstract
Driven by advances in computing devices and big data, deep learning has been widely applied to a broad range of problems. However, conventional neural network training is not suited to decentralized collaborative systems in which data owners are unwilling to expose their own data. Federated learning, which trains models in a decentralized manner, has attracted attention because participants release only gradients. Nevertheless, shared gradients still retain some sensitive information about the underlying data. Privacy-preserving federated learning (PPFL) addresses this by aggregating the gradients submitted by collaborators without exposing any knowledge of those gradients. However, during the PPFL procedure some gradients may be dropped, intentionally or unintentionally, so that collaborators cannot contribute effective gradients; moreover, checking for model tampering becomes impossible in PPFL. This paper provides contribution verification, which allows users to confirm that their own gradients have been aggregated into the global model in PPFL. The proposed method is compatible with any gradient-descent-based federated learning.
Keywords
Federated learning, Privacy protection, Artificial intelligence, Machine learning, Contribution verifiability, Orthogonal basis