WaterJudge: Quality-Detection Trade-off when Watermarking Large Language Models
arXiv (2024)
Abstract
Watermarking generative-AI systems, such as LLMs, has gained considerable
interest, driven by their enhanced capabilities across a wide range of tasks.
Although current approaches have demonstrated that small, context-dependent
shifts in the word distributions can be used to apply and detect watermarks,
there has been little work analyzing the impact these perturbations have on
the quality of generated texts. Balancing high detectability with minimal
performance degradation is crucial when selecting an appropriate watermarking
setting; this paper therefore proposes a simple analysis framework in which
comparative assessment, a flexible NLG evaluation framework, is used to assess
the quality degradation caused by a particular watermark setting. We
demonstrate that our framework provides easy visualization of the
quality-detection trade-off across watermark settings, enabling a simple way
to find an LLM watermark operating point with well-balanced performance. The
approach is applied to two different summarization systems and a translation
system, enabling both cross-model analysis for a task and cross-task analysis.
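To make the watermarking mechanism concrete: the "small, context-dependent shifts in the word distributions" that the abstract refers to are, in common schemes, implemented by biasing the model toward a pseudo-random subset of the vocabulary (a "green list") seeded by the preceding context, and detection counts how often generated tokens fall on that list. The sketch below is a hypothetical illustration of this idea, not the paper's own framework; the toy vocabulary, the `GAMMA` and `DELTA` parameters, and the helper names are all assumptions made for exposition.

```python
import hashlib
import random

# Hypothetical illustration: a seeded "green-list" watermark in the style
# of existing LLM watermarking approaches. The toy vocabulary and the
# GAMMA / DELTA settings below are assumptions, not values from the paper.

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GAMMA = 0.5   # fraction of the vocabulary placed on the green list
DELTA = 2.0   # logit bias added to green tokens (strength of the shift)

def green_list(prev_token: str) -> set:
    """Pseudo-random green list, deterministically seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GAMMA * len(VOCAB))))

def green_fraction(tokens: list) -> float:
    """Detection statistic: fraction of tokens on their context's green list.

    Unwatermarked text should score near GAMMA; watermarked text (where
    generation added DELTA to green-token logits) scores noticeably higher.
    """
    hits = sum(tok in green_list(prev)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Raising `DELTA` makes the watermark easier to detect but shifts the output distribution further from the model's natural one, which is exactly the quality-detection trade-off the paper's framework is designed to visualize.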