Security and Privacy Challenges of Large Language Models: A Survey
arXiv (2024)
Abstract
Large Language Models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text,
language translation, and question answering. LLMs have become popular tools
for computerized language processing tasks, capable of analyzing complicated
linguistic patterns and providing relevant, appropriate responses depending
on the context. While offering significant advantages,
these models are also vulnerable to security and privacy attacks, such as
jailbreaking attacks, data poisoning attacks, and Personally Identifiable
Information (PII) leakage attacks. This survey provides a thorough review of
the security and privacy challenges of LLMs for both training data and users,
along with the application-based risks in various domains, such as
transportation, education, and healthcare. We assess the extent of LLM
vulnerabilities, investigate emerging security and privacy attacks for LLMs,
and review the potential defense mechanisms. Additionally, the survey outlines
existing research gaps in this domain and highlights future research
directions.