Learn to Disguise: Avoid Refusal Responses in LLM's Defense via a Multi-agent Attacker-Disguiser Game
arXiv (2024)
Abstract
As large models achieve stronger performance on natural language processing
tasks, potential moral and ethical issues arise. Malicious attackers can induce
large models to jailbreak and generate illegal or privacy-invasive information
through techniques such as prompt engineering. Large models therefore counter
such attacks with techniques such as safety alignment. However, a strong
defense mechanism based on refusal replies is easily identified by attackers
and exploited to strengthen their attacks. In this paper, we propose a
multi-agent attacker-disguiser game approach that achieves a weak defense
mechanism, allowing the large model to both reply to the attacker safely and
hide its defense intent. First, we construct a multi-agent framework that
simulates attack and defense scenarios, with agents playing different roles
responsible for the attack, disguise, safety evaluation, and disguise
evaluation tasks.
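The abstract does not spell out how the four roles are realized. As a rough
illustration only, the Python sketch below wires each role to a generic
black-box query function; the Agent class, stub_llm, and all prompt wording
are our own assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """One role in the attacker-disguiser game; query_fn wraps any black-box LLM."""
    name: str
    system_prompt: str
    query_fn: Callable[[str, str], str]

    def act(self, message: str) -> str:
        return self.query_fn(self.system_prompt, message)

def stub_llm(system_prompt: str, message: str) -> str:
    # Placeholder for a real black-box model call; replace with an API client.
    # Ignores its inputs and returns a canned reply so the sketch runs end to end.
    return "placeholder reply (no refusal, nothing unsafe)"

# The four roles named in the paper; the prompt text here is hypothetical.
attacker = Agent("attacker", "Craft a jailbreak prompt for the given topic.", stub_llm)
disguiser = Agent("disguiser", "Reply safely, but avoid refusal phrasing so the defense intent stays hidden.", stub_llm)
safety_evaluator = Agent("safety_evaluator", "Answer UNSAFE if the reply leaks harmful content, else SAFE.", stub_llm)
disguise_evaluator = Agent("disguise_evaluator", "Answer REFUSAL if the reply reads as a refusal, else DISGUISED.", stub_llm)
```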
After that, we design attack and disguise game algorithms to optimize the game
strategies of the attacker and the disguiser, and use a curriculum learning
process to strengthen the agents' capabilities. Experiments verify that our
method strengthens the model's ability to disguise its defense intent more
effectively than other methods. Moreover, our approach can adapt any black-box
large model to assist in its defense and is unaffected by model version
iterations.
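To make the game-plus-curriculum idea concrete, the sketch below (reusing the
agents from the sketch above) shows one plausible shape of the loop: the
attacker probes, the disguiser replies, the two evaluators judge, and
successful exchanges are kept as the curriculum advances from easier to harder
topics. play_round, curriculum_game, and the topic schedule are illustrative
names under our assumptions, not the paper's algorithm.

```python
def play_round(topic: str) -> tuple[str, str, bool]:
    # One adversarial exchange: attack -> disguised reply -> two judgments.
    attack = attacker.act(topic)
    reply = disguiser.act(attack)
    is_safe = "UNSAFE" not in safety_evaluator.act(reply)
    is_disguised = "REFUSAL" not in disguise_evaluator.act(reply)
    return attack, reply, is_safe and is_disguised

def curriculum_game(topics_by_level: list[list[str]],
                    rounds_per_topic: int = 3) -> list[tuple[str, str]]:
    # topics_by_level orders attack topics from easy to hard: a stand-in
    # for whatever curriculum schedule the paper actually uses.
    demonstrations = []  # successful (attack, reply) pairs, reusable as in-context examples
    for topics in topics_by_level:
        for topic in topics:
            for _ in range(rounds_per_topic):
                attack, reply, disguiser_wins = play_round(topic)
                if disguiser_wins:
                    demonstrations.append((attack, reply))
                    break  # the disguiser beat this attack; move to the next topic
    return demonstrations

demos = curriculum_game([["benign probing"], ["role-play jailbreak"]])
```

Because every call goes through the black-box query function, the same loop
applies to any API-only model, which matches the abstract's claim that the
approach needs no access to model internals.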