ConfusionPrompt: Practical Private Inference for Online Large Language Models
CoRR (2023)
Abstract
State-of-the-art large language models (LLMs) are commonly deployed as online
services, which requires users to transmit informative prompts to cloud servers
and raises substantial privacy concerns. In response, we present
ConfusionPrompt, a novel private LLM inference framework designed to obfuscate
the server by: (i) decomposing the prompt into sub-prompts, and (ii) generating
pseudo prompts along with the genuine sub-prompts as input to the online LLM.
The user then recomposes the returned responses to obtain the final, complete
response. This design gives our framework two advantages over previous
protocols: (i) it integrates seamlessly with existing black-box LLMs, and
(ii) it achieves a significantly better privacy-utility trade-off than
existing text-perturbation-based methods. We develop a
(λ, μ, ρ)-privacy model to formulate the requirement for a
privacy-preserving group of prompts, and provide a complexity analysis
affirming ConfusionPrompt's efficiency. Our empirical evaluation shows that
our method offers significantly higher utility than both local inference with
open-source models and perturbation-based techniques, while requiring far less
memory than open-source LLMs.
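The abstract's pipeline (decompose the prompt, mix in pseudo prompts, query the online LLM, recompose the genuine responses) can be illustrated with a minimal sketch. Everything below is hypothetical: the `decompose`, `obfuscate`, and `recompose` helpers and the toy echo server are illustrative stand-ins, not the paper's actual algorithms, which are specified in the full text rather than in this abstract.

```python
import random

def decompose(prompt):
    # Toy decomposition: split the prompt into sentence-level sub-prompts.
    return [s.strip() for s in prompt.split(".") if s.strip()]

def obfuscate(sub_prompts, decoys, seed=0):
    # Mix genuine sub-prompts with pseudo (decoy) prompts and shuffle.
    # Each item is tagged with its original position (decoys get -1),
    # so only the user can tell genuine from pseudo prompts.
    batch = [(p, i) for i, p in enumerate(sub_prompts)]
    batch += [(d, -1) for d in decoys]
    random.Random(seed).shuffle(batch)
    prompts = [p for p, _ in batch]
    tags = [t for _, t in batch]
    return prompts, tags

def recompose(responses, tags):
    # Keep only responses to genuine sub-prompts, restoring original order.
    pairs = sorted((t, r) for t, r in zip(tags, responses) if t >= 0)
    return " ".join(r for _, r in pairs)

# End-to-end toy run; an echo function stands in for the online LLM.
subs = decompose("What is the capital of France. How tall is the Eiffel Tower.")
prompts, tags = obfuscate(subs, ["What is the capital of Peru"], seed=42)
responses = [f"answer({p})" for p in prompts]  # server answers every prompt
final = recompose(responses, tags)
```

The server sees three interleaved prompts and answers all of them; only the user, holding the tags, can discard the decoy response and reassemble the genuine answer.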