Native judgments of non-native usage: experiments in preposition error detection
HumanJudge '08: Proceedings of the Workshop on Human Judgements in Computational Linguistics (2008)
Abstract
Evaluation and annotation are two of the greatest challenges in developing NLP instructional or diagnostic tools that mark grammar and usage errors in the writing of non-native speakers. Past approaches have commonly used only one rater to annotate a corpus of learner errors for comparison with system output. In this paper, we show how using only one rater can skew system evaluation, and we then present a sampling approach that makes it possible to evaluate a system more efficiently.
Keywords
native judgment, greatest challenge, diagnostic tool, non-native speaker, system output, preposition error detection, non-native usage, usage error, sampling approach, system evaluation, past approach, learner error, error detection