PRE: A Peer Review Based Large Language Model Evaluator
CoRR (2024)
Abstract
The impressive performance of large language models (LLMs) has attracted
considerable attention from the academic and industrial communities. Beyond
how to construct and train LLMs, how to effectively evaluate and compare
their capabilities is also widely recognized as an important yet difficult
problem. Existing paradigms rely on either human annotators or model-based
evaluators to evaluate the performance of LLMs on different tasks. However,
these paradigms often suffer from high cost, low generalizability, and
inherited biases in practice, which make them incapable of supporting the
sustainable development of LLMs in the long term. To address these issues,
inspired by the peer review systems widely used in academic publication
process, we propose a novel framework that can automatically evaluate LLMs
through a peer-review process. Specifically, for a given evaluation task, we
first construct a small qualification exam to select "reviewers" from a pool
of powerful LLMs. Then, to actually evaluate the "submissions" written
by different candidate LLMs, i.e., the evaluatees, we use the reviewer LLMs to
rate or compare the submissions. The final ranking of evaluatee LLMs is
generated based on the results provided by all reviewers. We conducted
extensive experiments on text summarization tasks with eleven LLMs including
GPT-4. The results demonstrate the existence of bias when a single LLM is used
as the evaluator. Moreover, our PRE model outperforms all baselines,
illustrating the effectiveness of the peer-review mechanism.
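To make the pipeline described above concrete, the following is a minimal Python sketch of the three stages the abstract outlines: qualifying reviewers on a small exam, collecting reviewer ratings of each evaluatee's submission, and aggregating them into a final ranking. All function names, the [0, 1] scoring scale, the 0.8 qualification threshold, and the simple mean aggregation are assumptions for illustration only; the paper's actual exam construction, rating prompts, and aggregation rules may differ.

from typing import Callable, Dict, List, Tuple

def select_reviewers(
    candidates: Dict[str, Callable[[str], float]],
    exam: List[Tuple[str, float]],
    pass_threshold: float = 0.8,
) -> Dict[str, Callable[[str], float]]:
    # Keep only candidate LLMs whose exam ratings stay close to the reference
    # scores; ratings and references are assumed to lie in [0, 1] here.
    reviewers = {}
    for name, rate_fn in candidates.items():
        mean_error = sum(abs(rate_fn(q) - ref) for q, ref in exam) / len(exam)
        if 1.0 - mean_error >= pass_threshold:
            reviewers[name] = rate_fn
    return reviewers

def peer_review_ranking(
    reviewers: Dict[str, Callable[[str], float]],
    submissions: Dict[str, str],
) -> List[str]:
    # Average each evaluatee's score over all qualified reviewers and sort
    # from best to worst to produce the final ranking.
    scores = {
        evaluatee: sum(rate(text) for rate in reviewers.values()) / len(reviewers)
        for evaluatee, text in submissions.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

Since the abstract says reviewers may "rate or compare" submissions, a pairwise variant would replace the per-submission rating function with a comparator over pairs of submissions and aggregate win rates rather than mean scores.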