Is Watermarking LLM-Generated Code Robust?
arXiv (2024)
Abstract
We present the first study of the robustness of existing watermarking
techniques on Python code generated by large language models. Although prior
work has shown that watermarking can be robust for natural language, we show
that these watermarks are easily removed from code by semantic-preserving
transformations.
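To illustrate what a semantic-preserving transformation looks like, the sketch below applies consistent variable renaming to a Python snippet using the standard-library `ast` module. This is an assumed, minimal example of the general attack class the abstract describes, not the paper's actual transformation suite; the `RenameVariables` class and the `v0`, `v1`, … naming scheme are ours.

```python
import ast
import builtins


class RenameVariables(ast.NodeTransformer):
    """Consistently rename user-defined variables while skipping
    builtins, so the transformed program behaves identically.
    Renaming changes the token sequence a watermark detector sees
    without changing program semantics."""

    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if not hasattr(builtins, node.id):  # leave builtins like print intact
            node.id = self.mapping.setdefault(node.id, f"v{len(self.mapping)}")
        return node


src = (
    "items = [1, 2, 3]\n"
    "total = 0\n"
    "for item in items:\n"
    "    total += item\n"
)
transformer = RenameVariables()
renamed = ast.unparse(transformer.visit(ast.parse(src)))
print(renamed)  # identifiers differ, behavior is unchanged
```

Running both versions produces identical results, while every user-chosen identifier (and hence the surface token stream) has changed.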