Crowdsourcing Inference-Rule Evaluation.

ACL '12: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2 (2012)

Abstract
The importance of inference rules to semantic applications has long been recognized, and extensive work has been carried out to automatically acquire inference-rule resources. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed "instance-based evaluation" method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large number of annotations with high inter-annotator agreement, at low cost and in a short period of time, without requiring the training of expert annotators.
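In instance-based evaluation, each rule application is judged in the context of a specific instance, and the resulting crowd judgments must be aggregated and checked for agreement. As a minimal illustrative sketch (not the authors' implementation), the Python snippet below aggregates hypothetical worker judgments per rule-application instance by majority vote and computes Fleiss' kappa, one common measure of inter-annotator agreement; the data layout and function names are assumptions for illustration only.

```python
from collections import Counter

def fleiss_kappa(item_judgments, categories=("yes", "no")):
    """Fleiss' kappa for N items, each judged by the same number of raters.

    item_judgments: list of lists, one inner list of category labels per
    rule-application instance (all inner lists must have equal length).
    """
    n_items = len(item_judgments)
    n_raters = len(item_judgments[0])

    p_bar = 0.0            # mean per-item agreement
    totals = Counter()     # category counts over all judgments
    for judgments in item_judgments:
        counts = Counter(judgments)
        totals.update(counts)
        # P_i = (sum_j n_ij^2 - n) / (n * (n - 1))
        p_bar += (sum(c * c for c in counts.values()) - n_raters) / (
            n_raters * (n_raters - 1)
        )
    p_bar /= n_items

    # Chance agreement P_e = sum_j p_j^2 over overall category proportions
    total = n_items * n_raters
    p_e = sum((totals[c] / total) ** 2 for c in categories)
    return (p_bar - p_e) / (1 - p_e)

def majority_label(judgments):
    # Aggregate the crowd judgments for one rule application by majority vote.
    return Counter(judgments).most_common(1)[0][0]

# Hypothetical data: 3 workers per instance, each answering whether the
# inference rule holds in the given sentence context.
judgments = [
    ["yes", "yes", "no"],
    ["no", "no", "no"],
    ["yes", "yes", "yes"],
    ["yes", "no", "no"],
]
print([majority_label(j) for j in judgments])  # ['yes', 'no', 'yes', 'no']
print(round(fleiss_kappa(judgments), 3))       # 0.333
```

On this toy data the kappa is 0.333; the paper's claim is that its simplified crowdsourcing protocol achieves high agreement on real rule applications, which any such aggregation script would verify in the same way.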
Keywords
inference-rule resource, substantial annotator training, training expert annotators, extensive work, high inter-annotator agreement, inference rule, instance-based evaluation, large amount, low cost, non-trivial task, inference-rule evaluation