How Much Annotation is Needed to Compare Summarization Models?
CoRR (2024)

Abstract
Modern instruction-tuned models have become highly capable in text generation
tasks such as summarization, and new models are expected to be released at a
steady pace. In practice, one may now wish to choose, confidently but with
minimal effort, the best-performing summarization model for a new domain or
purpose. In this work, we empirically investigate the test sample size
necessary to select a preferred model in the context of news summarization.
Empirical results reveal that comparative evaluation converges quickly for both
automatic and human evaluation, with clear preferences for a system emerging
from fewer than 100 examples. The human preference data allows us to quantify
how well automatic scores can reproduce preference rankings across a variety of
downstream summarization tasks. We find that, while automatic metrics are
stable at smaller sample sizes, only some automatic metrics are able to
moderately predict model win rates according to human preference.
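To illustrate the kind of convergence analysis the abstract describes, the sketch below simulates pairwise preference judgments between two models and computes a bootstrap confidence interval on the win rate at increasing sample sizes. This is a minimal illustration, not the paper's methodology; the preference rate `true_p`, the sample sizes, and all function names are assumptions for the example.

```python
import random

def win_rate(prefs):
    """Fraction of pairwise judgments won by model A (prefs are 0/1)."""
    return sum(prefs) / len(prefs)

def bootstrap_ci(prefs, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the win rate."""
    rng = random.Random(seed)
    stats = sorted(
        win_rate([rng.choice(prefs) for _ in prefs])
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Simulated judgments: 1 = annotator preferred model A, 0 = model B.
rng = random.Random(42)
true_p = 0.65  # hypothetical true preference rate for model A
for n in (25, 50, 100):
    prefs = [1 if rng.random() < true_p else 0 for _ in range(n)]
    lo, hi = bootstrap_ci(prefs)
    print(f"n={n:3d}  win rate={win_rate(prefs):.2f}  95% CI=({lo:.2f}, {hi:.2f})")
```

As the interval narrows with n, one can see how a preference for one system could emerge from on the order of 100 judgments when the true gap between systems is large enough.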