UltraEval: A Lightweight Platform for Flexible and Comprehensive Evaluation for LLMs
CoRR (2024)
Abstract
Evaluation is pivotal for honing Large Language Models (LLMs), pinpointing
their capabilities and guiding enhancements. The rapid development of LLMs
calls for a lightweight and easy-to-use framework for swift evaluation
deployment. However, due to the various implementation details to consider,
developing a comprehensive evaluation platform is never easy. Existing
platforms are often complex and poorly modularized, hindering seamless
incorporation into researchers' workflows. This paper introduces UltraEval, a
user-friendly evaluation framework characterized by its lightweight design,
comprehensiveness, modularity, and efficiency. We identify and reimplement
three core components of model evaluation (models, data, and metrics). The
resulting composability allows for the free combination of different models,
tasks, prompts, and metrics within a unified evaluation workflow. Additionally,
UltraEval supports diverse models owing to a unified HTTP service and provides
sufficient inference acceleration. UltraEval is now publicly available to
researchers [Website is at ].
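The composability described above can be illustrated with a minimal, hypothetical sketch. The names and interfaces below (`Model`, `evaluate`, `exact_match`) are illustrative assumptions, not UltraEval's actual API; the point is only that once models, data, prompts, and metrics are separate components, any combination can run through one workflow:

```python
# Minimal hypothetical sketch of a composable evaluation workflow.
# These names and interfaces are illustrative only, not UltraEval's API.
from typing import Callable, List

# A "model" is any callable mapping a prompt string to an output string.
Model = Callable[[str], str]

def exact_match(prediction: str, reference: str) -> float:
    """A simple metric: 1.0 if prediction equals reference, else 0.0."""
    return float(prediction.strip() == reference.strip())

def evaluate(model: Model,
             data: List[dict],
             prompt_template: str,
             metric: Callable[[str, str], float]) -> float:
    """Combine a model, a dataset, a prompt, and a metric into one run."""
    scores = []
    for example in data:
        prompt = prompt_template.format(**example)
        prediction = model(prompt)
        scores.append(metric(prediction, example["answer"]))
    return sum(scores) / len(scores)

# Toy "model" standing in for an LLM behind an HTTP service.
toy_model: Model = lambda prompt: "4" if "2 + 2" in prompt else "unknown"

data = [{"question": "What is 2 + 2?", "answer": "4"}]
score = evaluate(toy_model, data, "Q: {question}\nA:", exact_match)
print(score)  # → 1.0
```

Swapping in a different model, prompt template, or metric requires changing only that one argument, which is the kind of free combination the abstract claims.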