RewardBench: Evaluating Reward Models for Language Modeling
arXiv (2024)
Abstract
Reward models (RMs) are at the crux of successful RLHF to align pretrained
models to human preferences, yet there has been relatively little study that
focuses on evaluation of those reward models. Evaluating reward models presents
an opportunity to understand the opaque technologies used for alignment of
language models and which values are embedded in them. To date, very few
descriptors of capabilities, training methods, or open-source reward models
exist. In this paper, we present RewardBench, a benchmark dataset and code-base
for evaluation, to enhance scientific understanding of reward models. The
RewardBench dataset is a collection of prompt-win-lose trios spanning chat,
reasoning, and safety, to benchmark how reward models perform on challenging,
structured and out-of-distribution queries. We created specific comparison
datasets for RMs that have subtle, but verifiable reasons (e.g. bugs, incorrect
facts) why one answer should be preferred to another. On the RewardBench
leaderboard, we evaluate reward models trained with a variety of methods, such
as the direct MLE training of classifiers and the implicit reward modeling of
Direct Preference Optimization (DPO), and on a spectrum of datasets. We present
many findings on propensity for refusals, reasoning limitations, and
instruction following shortcomings of various reward models towards a better
understanding of the RLHF process.
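Below is a minimal sketch of the core evaluation idea: for each prompt-win-lose trio, a reward model scores both completions, and a trio counts as correct when the preferred ("win") answer receives the higher score. It assumes a Hugging Face sequence-classification reward model; the model name, prompt formatting, and example trio are illustrative placeholders rather than RewardBench's exact pipeline.

```python
# Sketch of pairwise reward-model accuracy on prompt-win-lose trios.
# Assumes a sequence-classification RM from the Hugging Face hub; the model
# name and chat formatting are placeholders, not RewardBench's actual code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example RM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def score(prompt: str, response: str) -> float:
    """Return the scalar reward the classifier assigns to (prompt, response)."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

# Each trio pairs one prompt with a preferred ("win") and a rejected ("lose")
# completion that differ for a subtle but verifiable reason.
trios = [
    {"prompt": "What is 2 + 2?", "chosen": "2 + 2 = 4.", "rejected": "2 + 2 = 5."},
]

correct = sum(
    score(t["prompt"], t["chosen"]) > score(t["prompt"], t["rejected"])
    for t in trios
)
print(f"pairwise accuracy: {correct / len(trios):.2f}")
```

For DPO-trained models, which expose no explicit reward head, the same comparison can be made with the implicit reward beta * log(pi_theta(y|x) / pi_ref(y|x)) computed from policy and reference log-probabilities, so both families of models are scored on the same trios.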