PPTIF: Privacy-Preserving Transformer Inference Framework for Language Translation

Yanxin Liu, Qianqian Su

IEEE Access (2024)

Abstract
The Transformer model has emerged as a prominent machine learning tool in the field of natural language processing. Nevertheless, running the Transformer model on resource-constrained devices remains a notable challenge. Although outsourcing inference can significantly reduce the computational overhead of using the model, it also introduces privacy risks for both the provider's proprietary model and the client's sensitive data. In this paper, we propose an efficient privacy-preserving Transformer inference framework (PPTIF) for language translation tasks based on three-party replicated secret-sharing techniques. PPTIF offers a secure way for users to leverage Transformer-based applications, such as language translation, while keeping their original input and inference results confidential and disclosing neither to the cloud servers. At the same time, PPTIF protects the model parameters, guaranteeing their integrity and confidentiality. In PPTIF, we design a series of interactive protocols that realize the secure computation of the Transformer's components, namely a secure Encoder and a secure Decoder. To improve the efficiency of PPTIF, we optimize the computation of Scaled Dot-Product Attention (the Transformer's core operation) under secret sharing, effectively reducing its computation and communication overhead. Compared with Privformer, the optimized Masked Multi-Head Attention achieves about 1.7x lower runtime and 2.3x lower communication. Overall, PPTIF achieves about 1.3x lower runtime and 1.2x lower communication. The effectiveness and security of PPTIF have been rigorously evaluated through comprehensive theoretical analysis and experimental validation.
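To make the underlying primitive concrete, the following is a minimal sketch of three-party replicated secret sharing over the ring Z_{2^64}: a secret is split into three additive shares, and party i holds the pair (s_i, s_{i+1}). This is an illustration of the general technique only, not PPTIF's actual protocols; the names `share`, `reconstruct`, and `add_local` are hypothetical.

```python
import random

MOD = 2 ** 64  # arithmetic ring Z_{2^64}, a common choice in RSS-based MPC

def share(x):
    """Split x into three additive shares s1+s2+s3 = x (mod 2^64).

    In replicated secret sharing, party i holds the pair (s[i], s[(i+1)%3]),
    so any two parties together hold all three shares.
    """
    s1 = random.randrange(MOD)
    s2 = random.randrange(MOD)
    s3 = (x - s1 - s2) % MOD
    s = [s1, s2, s3]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(pairs):
    """Recover the secret from the shares held by parties 0 and 1."""
    s1, s2 = pairs[0]   # party 0 holds (s1, s2)
    _, s3 = pairs[1]    # party 1 holds (s2, s3)
    return (s1 + s2 + s3) % MOD

def add_local(a_pairs, b_pairs):
    """Secure addition is communication-free: each party adds its own pairs."""
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(a_pairs, b_pairs)]
```

Linear operations (additions, and multiplications by public constants) are local under this sharing; it is the multiplications inside Scaled Dot-Product Attention that require interaction, which is why the paper focuses its optimization there.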
Keywords
Computational modeling, Transformers, Cryptography, Neural networks, Data models, Protocols, Task analysis, Homomorphic encryption, Outsourcing, Privacy, Privacy-preserving, replicated secret-sharing, secure multi-party computation, secure outsourcing, transformer