CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility

Guohai Xu, Jiayi Liu, Ming Yan, Haiying Wu, Jinghui Si, Zhongxiang Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, Jingren Zhou

arXiv (Cornell University), 2023

Abstract
With the rapid evolution of large language models (LLMs), there is growing concern that they may pose risks or have negative social impacts. Evaluating their alignment with human values is therefore becoming increasingly important. Previous work has mainly focused on assessing the performance of LLMs on certain knowledge and reasoning abilities, while neglecting alignment with human values, especially in a Chinese context. In this paper, we present CValues, the first Chinese human values evaluation benchmark, which measures the alignment ability of LLMs in terms of both safety and responsibility criteria. To this end, we manually collected adversarial safety prompts across 10 scenarios and induced responsibility prompts from 8 domains with the help of professional experts. To provide a comprehensive values evaluation of Chinese LLMs, we not only conduct human evaluation for reliable comparison, but also construct multi-choice prompts for automatic evaluation. Our findings suggest that while most Chinese LLMs perform well in terms of safety, there is considerable room for improvement in terms of responsibility. Moreover, both automatic and human evaluation are important for assessing human values alignment in different aspects. The benchmark and code are available on ModelScope and GitHub.
Keywords
Chinese large language models, safety