Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption
arXiv (2024)
Abstract
The escalating focus on data privacy poses significant challenges for
collaborative neural network training, where data ownership and model
training/deployment responsibilities reside with distinct entities. Our
community has made substantial contributions to addressing this challenge,
proposing various approaches such as federated learning (FL) and
privacy-preserving machine learning based on cryptographic constructs like
homomorphic encryption (HE) and secure multiparty computation (MPC). However,
FL completely overlooks model privacy, and HE has limited extensibility
(confined to only one data provider). While the state-of-the-art MPC frameworks
provide reasonable throughput and simultaneously ensure model/data privacy,
they rely on a critical non-colluding assumption on the computing servers, and
relaxing this assumption is still an open problem.
In this paper, we present Pencil, the first private training framework for
collaborative learning that simultaneously offers data privacy, model privacy,
and extensibility to multiple data providers, without relying on the
non-colluding assumption. Our fundamental design principle is to construct the
n-party collaborative training protocol based on an efficient two-party
protocol, while ensuring that switching to different data providers
during model training introduces no extra cost. We introduce several novel
cryptographic protocols to realize this design principle and conduct a rigorous
security and privacy analysis. Our comprehensive evaluations of Pencil
demonstrate that (i) models trained in plaintext and models trained privately
using Pencil exhibit nearly identical test accuracies; (ii) the training
overhead of Pencil is greatly reduced: Pencil achieves 10x to 260x higher
throughput and two orders of magnitude less communication than prior art;
(iii) Pencil is resilient against both existing and adaptive (white-box) attacks.
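The design principle stated above (building n-party collaborative training on top of a two-party protocol, with cost-free switching between data providers) can be sketched as a round-robin training loop. This is a minimal illustrative sketch, not the paper's actual protocol or API: the class names (`TwoPartyTrainer`, `DataProvider`) are hypothetical, and the cryptographic two-party computation is replaced by placeholder arithmetic.

```python
# Illustrative sketch only: names and arithmetic are placeholders, not
# Pencil's real protocol. The point is the structure: each training step
# is an independent two-party interaction between the model owner and
# one data provider, so rotating among n providers adds no setup cost.

class DataProvider:
    def __init__(self, name, batches):
        self.name = name
        self.batches = batches  # private local data, never revealed in plaintext

class TwoPartyTrainer:
    """Model owner's side of the two-party protocol (placeholder logic)."""
    def __init__(self):
        self.model_state = 0.0  # stands in for protected model weights

    def train_step(self, provider, batch):
        # In the real system this step would run a cryptographic 2PC
        # protocol; here we simply fold the batch value into the state.
        self.model_state += batch

def collaborative_train(trainer, providers, rounds):
    # Round-robin over n providers: because each step is a self-contained
    # two-party interaction, switching providers needs no re-initialization.
    for r in range(rounds):
        for p in providers:
            trainer.train_step(p, p.batches[r % len(p.batches)])
    return trainer.model_state

providers = [DataProvider("A", [1.0, 2.0]), DataProvider("B", [3.0])]
trainer = TwoPartyTrainer()
result = collaborative_train(trainer, providers, rounds=2)
print(result)  # 1.0 + 3.0 + 2.0 + 3.0 = 9.0
```

The loop structure also shows why this design is extensible: adding a new data provider is just another entry in the list, with no change to the underlying two-party protocol.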