SecGPT: An Execution Isolation Architecture for LLM-Based Systems
arXiv (2024)
Abstract
Large language models (LLMs) extended as systems, such as ChatGPT, have begun
supporting third-party applications. These LLM apps leverage the de facto
natural language-based automated execution paradigm of LLMs: that is, apps and
their interactions are defined in natural language, provided access to user
data, and allowed to freely interact with each other and the system. These LLM
app ecosystems resemble the settings of earlier computing platforms, where
there was insufficient isolation between apps and the system. Because
third-party apps may not be trustworthy, and exacerbated by the imprecision of
the natural language interfaces, the current designs pose security and privacy
risks for users. In this paper, we propose SecGPT, an architecture for
LLM-based systems that aims to mitigate the security and privacy issues that
arise with the execution of third-party apps. SecGPT's key idea is to isolate
the execution of apps and more precisely mediate their interactions outside of
their isolated environments. We evaluate SecGPT against a number of case study
attacks and demonstrate that it protects against many security, privacy, and
safety issues that exist in non-isolated LLM-based systems. The performance
overhead incurred by SecGPT to improve security is under 0.3x for
three-quarters of the tested queries. To foster follow-up research, we release
SecGPT's source code at https://github.com/llm-platform-security/SecGPT.
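The abstract's key idea, isolating each app's execution and mediating any cross-app interaction through a trusted layer with user oversight, can be illustrated with a minimal sketch. All names here (`Hub`, `App`, `request_call`, the approval callback) are hypothetical illustrations of the general pattern, not SecGPT's actual API; see the released source code for the real design.

```python
class App:
    """A third-party app running in its own isolated environment."""

    def __init__(self, name, handler):
        self.name = name
        self._handler = handler  # the app's own logic, opaque to others

    def handle(self, query):
        return self._handler(query)


class Hub:
    """Trusted mediator: apps never invoke each other directly."""

    def __init__(self, approve):
        self._apps = {}          # each app kept in its own isolated entry
        self._approve = approve  # user-facing approval callback

    def register(self, app):
        self._apps[app.name] = app

    def request_call(self, caller, callee, query):
        # Cross-app interaction is only allowed through the hub,
        # and only after explicit user approval.
        if callee not in self._apps:
            raise KeyError(f"unknown app: {callee}")
        if not self._approve(caller, callee, query):
            raise PermissionError(f"{caller} -> {callee} denied by user")
        return self._apps[callee].handle(query)


# Usage: a ride-sharing app asks a calendar app for availability,
# with the user (here auto-approving for the demo) in the loop.
hub = Hub(approve=lambda caller, callee, query: True)
hub.register(App("calendar", lambda q: "free at 3pm"))
hub.register(App("rides", lambda q: "booked"))
print(hub.request_call("rides", "calendar", "when is the user free?"))
```

The point of the sketch is the control-flow inversion: in a non-isolated system the apps could call one another (and read each other's data) freely, whereas here every interaction is a mediated request that the user can deny.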