SecFormer: Towards Fast and Accurate Privacy-Preserving Inference for Large Language Models
CoRR (2024)
Abstract
With the growing use of large language models hosted on cloud platforms to
offer inference services, privacy concerns are escalating, especially
concerning sensitive data like investment plans and bank account details.
Secure Multi-Party Computation (SMPC) emerges as a promising solution to protect
the privacy of inference data and model parameters. However, the application of
SMPC in Privacy-Preserving Inference (PPI) for large language models,
particularly those based on the Transformer architecture, often leads to
considerable slowdowns or declines in performance. This is largely due to the
multitude of nonlinear operations in the Transformer architecture, which are
not well-suited to SMPC and are difficult to circumvent or optimize
effectively. To address this concern, we introduce an advanced optimization
framework called SecFormer, designed to strike an optimal balance between
performance and efficiency in PPI for Transformer models. By implementing
knowledge distillation techniques, we successfully eliminate the high-cost
exponential and maximum operations in PPI without sacrificing model
performance. Additionally, we have developed a suite of efficient SMPC
protocols that utilize segmented polynomials and Goldschmidt's method to handle
other complex nonlinear functions within PPI, such as GeLU, LayerNorm, and
Softmax. Our extensive experiments reveal that SecFormer outperforms MPCFormer
in performance, showing improvements of 5.6% and 24.2% for
BERT_BASE and BERT_LARGE, respectively. In terms of
efficiency, SecFormer runs 3.4 and 3.2 times faster than PUMA, demonstrating its
effectiveness and speed.