Towards Safety and Helpfulness Balanced Responses via Controllable Large Language Models
arXiv (2024)
Abstract
As large language models (LLMs) become easily accessible nowadays, the
trade-off between safety and helpfulness can significantly impact user
experience. A model that prioritizes safety leaves users feeling less
engaged and assisted, while one that prioritizes helpfulness can cause
harm. Possible harms include teaching people how to build a bomb, exposing
youth to inappropriate content, and hurting users' mental health. In this work,
we propose to balance safety and helpfulness in diverse use cases by
controlling both attributes in LLMs. We explore training-free and fine-tuning
methods that do not require extra human annotations and analyze the challenges
of controlling safety and helpfulness in LLMs. Our experiments demonstrate that
our method can rewind a learned model and unlock its controllability.
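One common way to realize training-free attribute control, as described above, is to condition the model on discrete control tags prepended to the prompt. The sketch below illustrates this idea; the tag format, attribute names, and level range are assumptions for illustration, not the paper's actual scheme.

```python
# Hypothetical illustration of training-free attribute control:
# prepend discrete control tags specifying target safety and
# helpfulness levels, which a downstream LLM is assumed to
# condition on. The tag syntax and [0, 4] scale are assumptions.

def build_controlled_prompt(user_query: str, safety: int, helpfulness: int) -> str:
    """Prefix the query with control tags for each attribute.

    Levels range from 0 (lowest) to 4 (highest).
    """
    if not (0 <= safety <= 4 and 0 <= helpfulness <= 4):
        raise ValueError("attribute levels must be in [0, 4]")
    return f"<safety={safety}> <helpfulness={helpfulness}> {user_query}"

# Example: request maximum safety with high helpfulness.
prompt = build_controlled_prompt("How do fireworks work?", safety=4, helpfulness=3)
print(prompt)
```

Varying the two levels independently lets one trade safety against helpfulness per use case, which is the balance the abstract describes.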