A Survey of Trojans in Neural Models of Source Code: Taxonomy and Techniques

CoRR (2023)

Abstract
In this work, we study the literature in Explainable AI and Safe AI to understand the poisoning of neural models of code. To do so, we first establish a novel taxonomy of Trojan AI for code and present a new aspect-based classification of triggers in neural models of code. Next, we highlight recent works that deepen our understanding of how these models interpret software code. We then examine several recent, state-of-the-art poisoning strategies that can be used to manipulate such models. The insights we draw can help foster future research in the area of Trojan AI for code.
Keywords
trojans, source code, neural models