Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models
arXiv (2024)
Abstract
Fine-tuning text-to-image models with reward functions trained on human
feedback data has proven effective for aligning model behavior with human
intent. However, excessive optimization with such reward models, which serve as
mere proxy objectives, can compromise the performance of fine-tuned models, a
phenomenon known as reward overoptimization. To investigate this issue in
depth, we introduce the Text-Image Alignment Assessment (TIA²) benchmark, which
comprises a diverse collection of text prompts, images, and human annotations.
Our evaluation of several state-of-the-art reward models on this benchmark
reveals their frequent misalignment with human assessment. We empirically
demonstrate that overoptimization occurs notably when a poorly aligned reward
model is used as the fine-tuning objective. To address this, we propose
TextNorm, a simple method that enhances alignment based on a measure of reward
model confidence estimated across a set of semantically contrastive text
prompts. We demonstrate that incorporating the confidence-calibrated rewards in
fine-tuning effectively reduces overoptimization, resulting in twice as many
wins in human evaluation for text-image alignment compared against the baseline
reward models.
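
The abstract describes TextNorm only at a high level. As a rough illustration, the sketch below shows one way such a confidence-calibrated reward could be computed, assuming a scalar reward function `reward_fn(image, prompt)` (e.g., a CLIP-style score). The function name, the softmax-style normalization over contrastive prompts, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def textnorm_reward(reward_fn, image, prompt, contrastive_prompts, tau=1.0):
    """Confidence-calibrated reward for `prompt`, normalized against a set
    of semantically contrastive prompts.

    `reward_fn(image, prompt) -> float` is any scalar proxy reward
    (hypothetical interface assumed here for illustration).
    """
    prompts = [prompt] + list(contrastive_prompts)
    # Raw proxy rewards r(image, c_i) for the target and contrastive prompts.
    rewards = torch.as_tensor([float(reward_fn(image, p)) for p in prompts])
    # Softmax over the candidate prompts: the mass assigned to the target
    # prompt measures how confidently the reward model prefers it over the
    # contrastive alternatives; a reward model that scores contradictory
    # prompts indiscriminately spreads mass across them and yields a
    # smaller calibrated reward.
    log_conf = torch.log_softmax(rewards / tau, dim=0)
    return log_conf[0]  # calibrated reward for the target prompt
```

Under these assumptions, fine-tuning would maximize `textnorm_reward` in place of the raw proxy reward, so images that the reward model cannot distinguish from semantically contradictory alternatives stop being over-rewarded.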