Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation

Aiwei Liu, Haoping Bai, Zhiyun Lu, Xiang Kong, Simon Wang, Jiulong Shan, Meng Cao, Lijie Wen

CoRR (2024)

Abstract
Aligning large language models (LLMs) with human expectations without human-annotated preference data is an important problem. In this paper, we propose a method to evaluate response preference using the output probabilities of response pairs under contrastive prompt pairs, which achieves better performance on LLaMA2-7B and LLaMA2-13B than RLAIF. Building on this, we propose an automatic alignment method, Direct Large Model Alignment (DLMA). First, we use contrastive prompt pairs to automatically generate preference data. Then, we evaluate the generated preference data with the same contrastive prompt pairs and compute a self-rewarding score. Finally, we use the DPO algorithm, combined with this self-rewarding score, to effectively align LLMs. In experiments, our DLMA method surpasses existing alignment baselines without relying on any human-annotated preference data.
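To make the pipeline concrete, below is a minimal Python sketch of the contrastive-prompt scoring idea described in the abstract. It assumes a HuggingFace-style causal LM; the prompt wording, the exact score formula, and the names `response_logprob`, `self_rewarding_score`, and `dpo_loss_with_self_reward` are illustrative assumptions, not the paper's published implementation.

```python
# Hedged sketch of DLMA-style contrastive-prompt scoring.
# Assumptions (not from the paper): prompt wording, score formula,
# and how the score enters the DPO objective.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # any causal LM works for this sketch
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16)
model.eval()

@torch.no_grad()
def response_logprob(prompt: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on `prompt`.

    Note: tokenizing prompt+response jointly may not split exactly at the
    prompt boundary; that approximation is acceptable for a sketch.
    """
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits[:, :-1, :]   # logits predict the next token
    targets = full_ids[:, 1:]
    logps = torch.log_softmax(logits.float(), dim=-1)
    token_logps = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the response tokens (everything after the prompt).
    return token_logps[:, prompt_ids.shape[1] - 1 :].sum().item()

def self_rewarding_score(query, y1, y2, pos_prefix, neg_prefix) -> float:
    """Contrastive-prompt preference score for the pair (y1, y2).

    Positive when y1 looks preferred: its probability should rise more
    than y2's when the 'good' prefix replaces the 'bad' one.
    """
    delta_y1 = (response_logprob(pos_prefix + query, y1)
                - response_logprob(neg_prefix + query, y1))
    delta_y2 = (response_logprob(pos_prefix + query, y2)
                - response_logprob(neg_prefix + query, y2))
    return delta_y1 - delta_y2

def dpo_loss_with_self_reward(logp_w, logp_l, ref_logp_w, ref_logp_l,
                              score, beta=0.1, clip=2.0):
    """DPO-style loss using the (clipped) self-rewarding score as a margin.

    How the score enters the objective is an assumption of this sketch;
    see the paper for the exact formulation.
    """
    margin = max(min(score, clip), -clip)
    logits = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)) - margin
    return -torch.nn.functional.logsigmoid(torch.tensor(logits))

# Illustrative contrastive prompt pair (assumed wording):
POS = "You are a helpful, harmless assistant.\n"
NEG = "You are an unhelpful, harmful assistant.\n"
```

In this reading, the same contrastive prompt pair serves both stages: it ranks sampled responses to build preference data, and its log-probability gap supplies a per-pair self-rewarding score that weights the DPO update. Clipping the score is one plausible way to keep a noisy reward from dominating the loss.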