Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback
arXiv (2024)
Abstract
Learning from human feedback plays an important role in aligning generative
models, such as large language models (LLMs). However, the effectiveness of this
approach can be influenced by adversaries, who may intentionally provide
misleading preferences to manipulate the output in an undesirable or harmful
direction. To tackle this challenge, we study a specific model within this
problem domain: contextual dueling bandits with adversarial feedback, where the
true preference label can be flipped by an adversary. We propose an algorithm,
robust contextual dueling bandit, based on uncertainty-weighted maximum
likelihood estimation. Our algorithm achieves an
Õ(d√(T)+dC) regret bound, where T is the number of rounds, d
is the dimension of the context, and 0 ≤ C ≤ T is the total number of rounds
with adversarial feedback. We also prove a lower bound showing that our regret bound
is nearly optimal, both in scenarios with and without (C=0) adversarial
feedback. Additionally, we conduct experiments to evaluate our proposed
algorithm against various types of adversarial feedback. Experimental results
demonstrate its superiority over the state-of-the-art dueling bandit algorithms
in the presence of adversarial feedback.
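The abstract attributes the algorithm's robustness to uncertainty-weighted maximum likelihood estimation: preference samples with high uncertainty are down-weighted, so an adversarially flipped label cannot move the estimate far. The sketch below illustrates this idea for a logistic preference model. The weight rule `min(1, alpha / ||phi||_{Sigma^{-1}})` and all function and parameter names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uncertainty_weighted_mle(phis, labels, lam=1.0, alpha=1.0, lr=0.5, iters=500):
    """Illustrative sketch: weighted logistic MLE for dueling feedback.

    Each round t supplies a feature difference phi_t (e.g. winner context
    minus loser context) and a binary preference label. Samples whose
    uncertainty ||phi_t||_{Sigma_t^{-1}} is large receive weight < 1,
    limiting the damage a flipped label can do. The specific weight rule
    here is an assumption for illustration only.
    """
    n, d = phis.shape
    Sigma = lam * np.eye(d)                       # regularized design matrix
    w = np.empty(n)
    for t in range(n):
        phi = phis[t]
        # uncertainty bonus ||phi||_{Sigma^{-1}} of this sample
        bonus = float(np.sqrt(phi @ np.linalg.solve(Sigma, phi)))
        w[t] = min(1.0, alpha / bonus)            # down-weight uncertain samples
        Sigma += w[t] * np.outer(phi, phi)        # accumulate weighted outer products
    theta = np.zeros(d)
    for _ in range(iters):                        # gradient descent on the weighted
        p = sigmoid(phis @ theta)                 # regularized negative log-likelihood
        grad = phis.T @ (w * (p - labels)) + lam * theta
        theta -= lr * grad / n
    return theta, w
```

With weights fixed at 1 this reduces to ordinary regularized logistic MLE; the weighting is what caps each sample's influence and yields the additive dC term in the regret rather than a multiplicative blow-up.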