Parameter-Efficient Detoxification with Contrastive Decoding
CoRR (2024)
Abstract
The field of natural language generation has witnessed significant
advancements in recent years, including the development of controllable text
generation techniques. However, controlling the attributes of the generated
text remains a challenge, especially when aiming to avoid undesirable behavior
such as toxicity. In this work, we introduce Detoxification Generator
(DETOXIGEN), an inference-time algorithm that steers the generation away from
unwanted styles. DETOXIGEN is an ensemble of a pre-trained language model
(generator) and a detoxifier. The detoxifier is intentionally trained on
toxic data representative of the undesirable attribute, encouraging it to
generate text exclusively in that style. During the actual generation, we use
the trained detoxifier to produce undesirable tokens for the generator to
contrast against at each decoding step. This approach directly informs the
generator to avoid generating tokens that the detoxifier considers highly
likely. We evaluate DETOXIGEN on the commonly used REALTOXICITYPROMPTS
benchmark (Gehman et al., 2020) with various language models as generators. We
find that it significantly outperforms previous approaches in detoxification
metrics while not compromising on the generation quality. Moreover, the
detoxifier is obtained by soft prompt-tuning using the same backbone language
model as the generator. Hence, DETOXIGEN requires only a tiny amount of extra
weights from the virtual tokens of the detoxifier to be loaded into GPU memory
while decoding, making it a promising lightweight, practical, and
parameter-efficient detoxification strategy.
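The core idea of contrasting the generator against the detoxifier at each decoding step can be sketched numerically. The snippet below is an illustrative formulation, not the paper's exact ensembling rule: it extrapolates away from the detoxifier's distribution in log space, so tokens the detoxifier rates as highly likely (i.e. toxic) are suppressed in the combined next-token distribution. All function names and the mixing weight `alpha` are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def contrastive_next_token_probs(gen_logits, detox_logits, alpha=1.0, eps=1e-12):
    """Down-weight tokens the detoxifier considers likely.

    Combines the two next-token distributions as
        p'(x) ∝ p_gen(x) * (p_gen(x) / p_detox(x))**alpha,
    an extrapolation away from the toxic distribution in log space.
    (Illustrative formulation; the paper's exact rule may differ.)
    """
    log_p_gen = np.log(softmax(gen_logits) + eps)
    log_p_detox = np.log(softmax(detox_logits) + eps)
    combined = (1 + alpha) * log_p_gen - alpha * log_p_detox
    return softmax(combined)

# Toy vocabulary of 4 tokens; the detoxifier strongly favors token 2,
# so we treat it as the "toxic" continuation.
gen_logits = np.array([1.0, 0.5, 1.2, 0.2])
detox_logits = np.array([0.1, 0.1, 3.0, 0.1])

p = contrastive_next_token_probs(gen_logits, detox_logits)
# Token 2 was the generator's top choice, but after contrasting
# against the detoxifier it is suppressed and token 0 wins.
print(p.argmax())  # prints 0
```

In the actual method, `gen_logits` and `detox_logits` would come from the same backbone language model, with the detoxifier differing only in its soft-prompt virtual tokens, which is why the memory overhead at decoding time is tiny.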