SacreROUGE: An Open-Source Library for Using and Developing Summarization Evaluation Metrics
Proceedings of the Second Workshop for NLP Open Source Software (NLP-OSS), 2020
Abstract
We present SacreROUGE, an open-source library for using and developing summarization evaluation metrics. SacreROUGE removes many obstacles that researchers face when using or developing metrics: (1) the library provides Python wrappers around the official implementations of existing evaluation metrics so they share a common, easy-to-use interface; (2) it provides functionality to evaluate how well any metric implemented in the library correlates with human-annotated judgments, so no additional code needs to be written for a new evaluation metric; and (3) it includes scripts for loading datasets that contain human judgments so they can easily be used for evaluation. This work describes the design of the library, including the core Metric interface, the command-line API for evaluating summarization models and metrics, and the scripts to load and reformat publicly available datasets. The development of SacreROUGE is ongoing and open to contributions from the community.
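To illustrate the idea of a shared metric interface described in the abstract, here is a minimal sketch. The names below (`Metric`, `score`, `UnigramOverlap`) are hypothetical illustrations chosen for this example, not SacreROUGE's actual API:

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class Metric(ABC):
    """Hypothetical common interface that wrapped metrics could share.

    Each metric scores a candidate summary against reference summaries
    and returns a dictionary of named scores.
    """

    @abstractmethod
    def score(self, summary: str, references: List[str]) -> Dict[str, float]:
        ...


class UnigramOverlap(Metric):
    """Toy metric: fraction of summary unigrams found in any reference."""

    def score(self, summary: str, references: List[str]) -> Dict[str, float]:
        summary_tokens = summary.lower().split()
        reference_tokens = {
            tok for ref in references for tok in ref.lower().split()
        }
        if not summary_tokens:
            return {"unigram_overlap": 0.0}
        hits = sum(1 for tok in summary_tokens if tok in reference_tokens)
        return {"unigram_overlap": hits / len(summary_tokens)}


metric = UnigramOverlap()
result = metric.score("the cat sat", ["the cat sat on the mat"])
print(result)  # all three summary tokens appear in the reference
```

Because every metric exposes the same `score` signature, downstream code (such as correlation analysis against human judgments) can treat all metrics uniformly.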
Keywords
evaluation, metrics, open-source, library