An Attacker's Dream? Exploring the Capabilities of ChatGPT for Developing Malware

Yin Minn Pa Pa, Shunsuke Tanizaki, Tetsui Kou, Michel van Eeten, Katsunari Yoshioka, Tsutomu Matsumoto

Proceedings of the 16th Cyber Security Experimentation and Test Workshop (CSET 2023)

Abstract
We investigate the potential for abuse of recent AI advances by developing seven malware programs and two attack tools using ChatGPT, OpenAI Playground's "text-davinci-003" model, and Auto-GPT, an open-source AI agent capable of generating automated prompts to accomplish user-defined goals. We confirm that: 1) Under the safety and moderation controls of recent AI systems, it is possible to generate functional malware and attack tools (up to about 400 lines of code) within 90 minutes, including debugging time. 2) Auto-GPT does not lower the hurdle of crafting the right prompts for malware generation, but it evades OpenAI's safety controls with its automatically generated prompts. When given goals with sufficient detail, it wrote the code for all nine malware and attack tools we tested. 3) There is still room to improve the moderation and safety controls of ChatGPT and the text-davinci-003 model, especially against the growing set of jailbreak prompts. Overall, we find that recent AI advances, including ChatGPT, Auto-GPT, and text-davinci-003, demonstrate the potential for generating malware and attack tools despite safety and moderation controls, highlighting the need for improved safety measures and enhanced safety controls in AI systems.
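The experiments above relied on programmatic access to the text-davinci-003 model. The paper does not publish its harness; as a minimal sketch, assuming OpenAI's legacy Completions endpoint and an illustrative (benign) prompt, a query could be built like this:

```python
import json
import urllib.request

# Legacy Completions endpoint used by text-davinci-003 (now deprecated by OpenAI).
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, api_key: str, max_tokens: int = 512) -> urllib.request.Request:
    """Build an HTTP request for the legacy Completions API.

    The parameter values here are illustrative assumptions, not the
    authors' actual settings.
    """
    payload = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic output simplifies repeated debugging runs
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending the request requires a valid API key:
# with urllib.request.urlopen(build_request("Explain TCP handshakes.", key)) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

Iterating such calls (generate, run, feed errors back as a new prompt) is what makes the reported 90-minute generate-and-debug cycle plausible.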
Keywords
AI generated malware, ChatGPT abuses, Auto-GPT abuses