Learning to Watermark LLM-generated Text via Reinforcement Learning
arXiv (2024)
Abstract
We study how to watermark LLM outputs, i.e. embedding algorithmically
detectable signals into LLM-generated text to track misuse. Unlike the current
mainstream methods that work with a fixed LLM, we expand the watermark design
space by including the LLM tuning stage in the watermark pipeline. While prior
works focus on token-level watermarks that embed signals into the output text, we
design a model-level watermark that embeds signals into the LLM weights, and
such signals can be detected by a paired detector. We propose a co-training
framework based on reinforcement learning that iteratively (1) trains a
detector to detect the generated watermarked text and (2) tunes the LLM to
generate text that is easily detectable by the detector while preserving its normal
utility. We empirically show that our watermarks are more accurate, robust, and
adaptable to new attacks. Our method also allows open-sourcing the watermarked
model. In addition, when used together with alignment, the extra overhead is
low: only an extra reward model (i.e., our detector) needs to be trained. We
hope our work encourages more effort toward studying a broader watermark design
space that is not limited to working with a fixed LLM. We open-source the code
at https://github.com/xiaojunxu/learning-to-watermark-llm .
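For intuition, below is a minimal, self-contained PyTorch sketch of the iterative co-training loop the abstract describes: step (1) trains the detector to separate watermarked text from non-watermarked text, and step (2) tunes the LLM with policy-gradient RL, using the detector's confidence as the reward. The toy models (ToyLM, Detector), the random stand-in for "human" text, the plain REINFORCE update, and the KL penalty weight are all illustrative assumptions, not the authors' actual implementation; see the linked repository for that.

# A minimal sketch of the detector/LLM co-training loop. Everything here
# (ToyLM, Detector, random "human" text, the 0.1 KL weight) is a hypothetical
# stand-in for the paper's setup, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ_LEN, BATCH = 64, 16, 32

class ToyLM(nn.Module):
    """A tiny stand-in "LLM": next-token logits from a mean-pooled prefix."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.head = nn.Linear(32, VOCAB)

    def forward(self, prefix):                          # prefix: (B, T) token ids
        return self.head(self.emb(prefix).mean(dim=1))  # (B, VOCAB) logits

    @torch.no_grad()
    def sample(self, batch):
        seq = torch.zeros(batch, 1, dtype=torch.long)   # token 0 serves as BOS
        for _ in range(SEQ_LEN):
            tok = torch.distributions.Categorical(logits=self(seq)).sample()
            seq = torch.cat([seq, tok[:, None]], dim=1)
        return seq                                      # (B, 1 + SEQ_LEN), incl. BOS

class Detector(nn.Module):
    """Binary classifier: watermarked (1) vs. non-watermarked (0) text."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.head = nn.Linear(32, 1)

    def forward(self, seq):
        return self.head(self.emb(seq).mean(dim=1)).squeeze(-1)  # (B,) logits

lm, ref_lm, detector = ToyLM(), ToyLM(), Detector()
ref_lm.load_state_dict(lm.state_dict())  # frozen reference for the utility term
opt_lm = torch.optim.Adam(lm.parameters(), lr=1e-3)
opt_det = torch.optim.Adam(detector.parameters(), lr=1e-3)

for step in range(200):
    # (1) Detector step: separate freshly sampled watermarked text from
    # non-watermarked text (random tokens here; real human text in practice).
    wm = lm.sample(BATCH)[:, 1:]
    human = torch.randint(0, VOCAB, (BATCH, SEQ_LEN))
    logits = detector(torch.cat([wm, human]))
    labels = torch.cat([torch.ones(BATCH), torch.zeros(BATCH)])
    det_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_det.zero_grad(); det_loss.backward(); opt_det.step()

    # (2) LLM step: REINFORCE with the detector's confidence as reward, plus a
    # KL-to-reference penalty standing in for "preserving normal utility".
    full = lm.sample(BATCH)                              # (B, 1 + SEQ_LEN)
    reward = torch.sigmoid(detector(full[:, 1:])).detach()
    adv = reward - reward.mean()                         # simple mean baseline
    logp = torch.zeros(BATCH)
    kl = torch.zeros(BATCH)
    for t in range(SEQ_LEN):
        lp = F.log_softmax(lm(full[:, :t + 1]), dim=-1)
        with torch.no_grad():
            lp_ref = F.log_softmax(ref_lm(full[:, :t + 1]), dim=-1)
        logp = logp + lp.gather(1, full[:, t + 1:t + 2]).squeeze(1)
        kl = kl + (lp.exp() * (lp - lp_ref)).sum(dim=1)
    lm_loss = -(adv * logp).mean() + 0.1 * kl.mean()
    opt_lm.zero_grad(); lm_loss.backward(); opt_lm.step()

    if step % 50 == 0:
        print(f"step {step}: det_loss={det_loss.item():.3f} "
              f"mean_reward={reward.mean().item():.3f}")

The KL term plays the role of the utility constraint mentioned in the abstract; a practical setup would replace plain REINFORCE with a standard RLHF-style algorithm such as PPO applied to a pretrained LLM, with the detector serving as the reward model.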