Generative AI for pentesting: the good, the bad, the ugly

Eric Hilario, Sami Azam, Jawahar Sundaram, Khwaja Imran Mohammed, Bharanidharan Shanmugam

International Journal of Information Security (2024)

Abstract
This paper examines the role of Generative AI (GenAI) and Large Language Models (LLMs) in penetration testing, exploring the benefits, challenges, and risks associated with cyber security applications. Generative artificial intelligence makes penetration testing more creative, allows test environments to be customised, and enables continuous learning and adaptation. We examined how GenAI (ChatGPT 3.5) assists penetration testers with options and suggestions during the five stages of penetration testing. The effectiveness of the GenAI tool was evaluated using a publicly available vulnerable machine from VulnHub. The tool responded quickly at each stage and produced a more detailed pentesting report. In this article, we also discuss the potential risks, unintended consequences, and uncontrolled AI development associated with pentesting.
Keywords
Cyber security, Generative AI, Large language models, Penetration testing, ChatGPT 3.5