Gaining Wisdom from Setbacks: Aligning Large Language Models via Mistake Analysis
arXiv (Cornell University), 2023
Abstract
The rapid development of large language models (LLMs) has not only provided
numerous opportunities but also presented significant challenges. This becomes
particularly evident when LLMs generate harmful or toxic content, whether
unintentionally or under deliberate inducement. Existing alignment methods
usually steer LLMs toward favorable outcomes by relying on human-annotated,
flawless instruction-response pairs. In contrast, this study proposes a novel
alignment technique based on mistake analysis, which deliberately exposes LLMs
to erroneous content so that they learn the reasons for mistakes and how to
avoid them. Mistakes are thus repurposed into valuable alignment data, helping
the model avoid producing erroneous responses. Without external models or human
annotations, our method leverages a model's intrinsic ability to discern
undesirable mistakes and improves the safety of its generated responses.
Experimental results show that our method outperforms existing alignment
approaches in enhancing model safety while maintaining overall utility.
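To make the idea concrete, below is a minimal sketch of how mistake-analysis alignment data might be constructed: the model is shown a known-bad response and prompted to explain the mistake itself, and the resulting triple becomes a fine-tuning sample. This assumes a Hugging Face-style causal LM; the model name, prompt wording, and helper functions (generate, build_mistake_analysis_example) are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of mistake-analysis data construction (assumptions noted
# above; not the paper's exact implementation).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper targets larger instruction-tuned LLMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Sample a continuation from the model for a given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens so only the new continuation is returned.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


def build_mistake_analysis_example(instruction: str, bad_response: str) -> dict:
    """Turn a known-bad response into an alignment training example.

    The model itself is prompted to explain why the response is harmful,
    so no external judge model or human annotation is required.
    """
    analysis_prompt = (
        f"Instruction: {instruction}\n"
        f"Response: {bad_response}\n"
        "Explain why this response is unsafe or incorrect:"
    )
    analysis = generate(analysis_prompt)
    # The (instruction, mistake, analysis) triple becomes a supervised
    # fine-tuning sample teaching the model to recognize and avoid
    # this class of mistake.
    return {
        "instruction": instruction,
        "mistake": bad_response,
        "analysis": analysis,
    }
```

In a full pipeline, examples produced this way would be mixed with ordinary instruction-response pairs during fine-tuning, so the model learns both what good responses look like and why bad ones fail.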
Keywords
large language models, language models, setbacks