tinyBenchmarks: evaluating LLMs with fewer examples
CoRR (2024)
Abstract
The versatility of large language models (LLMs) has led to the creation of
diverse benchmarks that thoroughly test a variety of language models'
abilities. These benchmarks consist of tens of thousands of examples, making
the evaluation of LLMs very expensive. In this paper, we investigate strategies to
reduce the number of evaluations needed to assess the performance of an LLM on
several key benchmarks. For example, we show that to accurately estimate the
performance of an LLM on MMLU, a popular multiple-choice QA benchmark
consisting of 14K examples, it is sufficient to evaluate this LLM on 100
curated examples. We release evaluation tools and tiny versions of popular
benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical
analysis demonstrates that these tools and tiny benchmarks are sufficient to
reliably and efficiently reproduce the original evaluation results.
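As a rough illustration of the idea behind evaluating on a small curated subset, the sketch below is a hypothetical estimator, not the paper's exact method (the paper builds on anchor examples and item-response-theory models). It clusters benchmark examples by the correctness patterns of a pool of reference models, keeps one "anchor" example per cluster, and estimates a new model's full-benchmark accuracy as a cluster-size-weighted average of its correctness on those anchors. The data here is synthetic and all names are illustrative.

```python
# Minimal sketch (assumption: not the authors' exact estimator) of estimating
# full-benchmark accuracy from ~100 weighted anchor examples.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_models, n_examples, n_anchors = 30, 14_000, 100

# Synthetic stand-in for real evaluation data:
# correctness[i, j] = 1 if reference model i answered example j correctly.
example_ease = rng.uniform(0.2, 0.9, size=n_examples)
model_skill = rng.uniform(-0.5, 0.5, size=n_models)
p_correct = np.clip(example_ease[None, :] + model_skill[:, None], 0.01, 0.99)
correctness = rng.binomial(1, p_correct)                  # (n_models, n_examples)

# Cluster examples by their correctness pattern across the reference models.
features = correctness.T                                  # (n_examples, n_models)
km = KMeans(n_clusters=n_anchors, n_init=5, random_state=0).fit(features)

# Keep the example closest to each cluster centroid as the anchor,
# weighted by the fraction of the benchmark its cluster covers.
anchors, weights = [], []
for c in range(n_anchors):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
    anchors.append(members[np.argmin(dists)])
    weights.append(len(members) / n_examples)
anchors, weights = np.array(anchors), np.array(weights)

# Evaluate a "new" model: compare its true accuracy on all 14K examples
# with the estimate obtained from only the 100 anchors.
new_model = rng.binomial(1, np.clip(example_ease + 0.1, 0.01, 0.99))
true_acc = new_model.mean()
estimated_acc = np.dot(weights, new_model[anchors])
print(f"true accuracy {true_acc:.3f} | anchor-based estimate {estimated_acc:.3f}")
```

The weighted average over anchors is the simplest version of this idea; the released tiny benchmarks and tools combine curated example selection with model-based (IRT-style) corrections to make the estimate robust across models.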