Best-of-Venom: Attacking RLHF by Injecting Poisoned Preference Data
arxiv(2024)
摘要
Reinforcement Learning from Human Feedback (RLHF) is a popular method for
aligning Language Models (LM) with human values and preferences. RLHF requires
a large number of preference pairs as training data, which are often used in
both the Supervised Fine-Tuning and Reward Model training, and therefore
publicly available datasets are commonly used. In this work, we study to what
extent a malicious actor can manipulate the LMs generations by poisoning the
preferences, i.e., injecting poisonous preference pairs into these datasets and
the RLHF training process. We propose strategies to build poisonous preference
pairs and test their performance by poisoning two widely used preference
datasets. Our results show that preference poisoning is highly effective: by
injecting a small amount of poisonous data (1-5
can effectively manipulate the LM to generate a target entity in a target
sentiment (positive or negative). The findings from our experiments also shed
light on strategies to defend against the preference poisoning attack.
更多查看译文
AI 理解论文
溯源树
样例
生成溯源树,研究论文发展脉络
Chat Paper
正在生成论文摘要