OpenEval: Benchmarking Chinese LLMs across Capability, Alignment and Safety
arXiv (2024)
Abstract
The rapid development of Chinese large language models (LLMs) poses significant
challenges for efficient LLM evaluation. While current initiatives have
introduced new benchmarks or evaluation platforms for assessing Chinese LLMs,
many of these focus primarily on capabilities, usually overlooking potential
alignment and safety issues. To address this gap, we introduce OpenEval, an
evaluation testbed that benchmarks Chinese LLMs across capability, alignment
and safety. For capability assessment, we include 12 benchmark datasets to
evaluate Chinese LLMs across 4 sub-dimensions: NLP tasks, disciplinary knowledge,
commonsense reasoning and mathematical reasoning. For alignment assessment,
OpenEval contains 7 datasets that examine bias, offensiveness, and
illegality in the outputs of Chinese LLMs. To evaluate safety,
especially anticipated risks (e.g., power-seeking, self-awareness) of advanced
LLMs, we include 6 datasets. In addition to these benchmarks, we have
implemented a phased public evaluation and benchmark update strategy to ensure
that OpenEval keeps pace with the development of Chinese LLMs, and can even
provide cutting-edge benchmark datasets to guide that development. In our
first public evaluation, we tested a range of Chinese LLMs,
spanning from 7B to 72B parameters, including both open-source and proprietary
models. Evaluation results indicate that while Chinese LLMs have shown
impressive performance in certain tasks, more attention should be directed
towards broader aspects such as commonsense reasoning, alignment, and safety.