Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization.

CoRR (2023)

Abstract
Despite recent advances, evaluating how well large language models (LLMs) follow user instructions remains an open problem. While prompt-based approaches to evaluating language models have become increasingly common, little work has examined the correctness of these methods. In this work, we perform a meta-evaluation of a variety of metrics to quantify how accurately they measure the instruction-following abilities of LLMs. Our investigation is performed on grounded, query-based summarization by collecting a new short-form, real-world dataset, riSum, containing 300 document-instruction pairs with 3 answers each. All 900 answers are rated by 3 human annotators. Using riSum, we analyze the agreement between evaluation methods and human judgment. Finally, we propose new LLM-based, reference-free evaluation methods that improve upon established baselines and perform on par with costly reference-based metrics that require high-quality summaries.
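As a rough illustration of the metric/human agreement analysis the abstract describes, the sketch below computes rank correlation between automatic-metric scores and averaged human ratings. The data values are hypothetical placeholders, and the choice of Kendall's tau and Spearman's rho is an assumption for illustration, not necessarily the correlation measures used in the paper's riSum analysis.

```python
# Illustrative sketch only: quantifying agreement between an automatic
# evaluation metric and human judgments, in the spirit of the
# meta-evaluation described above. Scores below are hypothetical.
from scipy.stats import kendalltau, spearmanr

# One value per answer from an automatic metric, and the mean of the
# three human annotator ratings for the same answer.
metric_scores = [0.72, 0.41, 0.88, 0.35, 0.64]
human_ratings = [4.0, 2.3, 4.7, 2.0, 3.3]

# Rank correlation is a common choice for metric/human agreement because
# it ignores differences in scale between the two scoring schemes.
tau, tau_p = kendalltau(metric_scores, human_ratings)
rho, rho_p = spearmanr(metric_scores, human_ratings)

print(f"Kendall tau: {tau:.3f} (p={tau_p:.3f})")
print(f"Spearman rho: {rho:.3f} (p={rho_p:.3f})")
```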
Keywords
better evaluation, case-study, instruction-following