A Case Study of Large Language Models (ChatGPT and CodeBERT) for Security-Oriented Code Analysis
arXiv (2023)
Abstract
LLMs can be applied to code analysis tasks such as code review and vulnerability
analysis. However, the strengths and limitations of adopting these LLMs for
code analysis remain unclear. In this paper, we delve into LLMs' capabilities
in security-oriented program analysis, considering the perspectives of both
attackers and security analysts. We focus on two representative LLMs, ChatGPT
and CodeBERT, and evaluate their performance on typical analysis tasks of
varying difficulty. Our study demonstrates these LLMs' efficiency in learning
high-level semantics from code, positioning ChatGPT as a potential asset in
security-oriented contexts. However, certain limitations must be acknowledged:
for example, the performance of these LLMs relies heavily on well-defined
variable and function names, so they are unable to learn from anonymized code.
We believe the concerns raised in this case study deserve in-depth
investigation in the future.
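The abstract notes that these LLMs depend heavily on meaningful identifier names and fail on anonymized code. A minimal sketch of how such an anonymized-code test set might be produced is shown below; the paper does not describe its exact anonymization procedure, so the placeholder naming scheme (`v0`, `v1`, ...) and the use of Python's `ast` module are assumptions for illustration only.

```python
import ast

class Anonymizer(ast.NodeTransformer):
    """Replace function, argument, and variable names with opaque
    placeholders (v0, v1, ...), mimicking the 'anonymized code'
    setting the abstract describes. Naming scheme is hypothetical."""

    def __init__(self):
        self.mapping = {}  # original name -> placeholder

    def _rename(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_FunctionDef(self, node):
        node.name = self._rename(node.name)
        self.generic_visit(node)  # descend into args and body
        return node

    def visit_arg(self, node):
        node.arg = self._rename(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._rename(node.id)
        return node

src = """
def average(total, count):
    result = total / count
    return result
"""
tree = Anonymizer().visit(ast.parse(src))
print(ast.unparse(tree))  # semantics preserved, identifier hints removed
```

Feeding both the original and the anonymized variant to a model and comparing its explanations is one way to probe how much of its apparent code understanding comes from identifier names rather than program structure.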