Comparing Annotated Datasets for Named Entity Recognition in English Literature.

International Conference on Language Resources and Evaluation (LREC), 2022

Abstract
The growing interest in named entity recognition (NER) in various domains has led to the creation of different benchmark datasets, often with slightly different annotation guidelines. To better understand the different NER benchmark datasets for the domain of English literature and their impact on the evaluation of NER tools, we analyse two existing annotated datasets and create two additional gold standard datasets. Following on from this, we evaluate the performance of two NER tools, one domain-specific and one general-purpose NER tool, using the four gold standards, and analyse the sources for the differences in the measured performance. Our results show that the performance of the two tools varies significantly depending on the gold standard used for the individual evaluations.
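As an illustration of the kind of evaluation the abstract describes, the sketch below shows one common way to score an NER tool against a gold standard: strict entity-level precision, recall and F1, where a prediction counts only if both span and label exactly match a gold annotation. This is not the paper's actual evaluation code, and the entity tuples and example data are hypothetical.

```python
# Minimal sketch of entity-level NER scoring against a gold standard.
# An entity is a (start_offset, end_offset, label) tuple; a prediction is
# correct only if it exactly matches a gold annotation (strict matching).
from typing import List, Tuple

Entity = Tuple[int, int, str]  # (start, end, label)

def entity_f1(gold: List[Entity], predicted: List[Entity]) -> Tuple[float, float, float]:
    """Return (precision, recall, F1) under exact span-and-label matching."""
    gold_set, pred_set = set(gold), set(predicted)
    true_positives = len(gold_set & pred_set)
    precision = true_positives / len(pred_set) if pred_set else 0.0
    recall = true_positives / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical annotations: gold standard vs. one NER tool's output.
gold = [(0, 13, "PER"), (31, 37, "LOC")]
pred = [(0, 13, "PER"), (45, 52, "LOC")]
print(entity_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

Because scoring is computed against whichever gold standard is chosen, differences in annotation guidelines between datasets directly change the measured precision and recall of the same tool, which is the effect the paper analyses.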
Keywords
named entity recognition, corpus annotation, annotated datasets, annotation guidelines, literature