Recall-oriented learning of named entities in Arabic Wikipedia

EACL 2012

Abstract
We consider the problem of NER in Arabic Wikipedia, a semisupervised domain adaptation setting for which we have no labeled training data in the target domain. To facilitate evaluation, we obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard categories. Standard supervised learning on newswire text leads to poor target-domain recall. We train a sequence model and show that a simple modification to the online learner---a loss function encouraging it to "arrogantly" favor recall over precision---substantially improves recall and F1. We then adapt our model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the self-training stage yields marginal gains.
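The recall-oriented bias described in the abstract can be illustrated with a small sketch. Assuming a cost-augmented (loss-augmented) decoding setup of the kind used by MIRA- or perceptron-style online learners, one way to favor recall is to charge a higher cost for missed entity tokens than for spurious ones. The function name, BIO tag scheme, and cost weights below are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch: a recall-oriented cost for cost-augmented decoding in an
# online structured learner. The asymmetry false_negative_cost >
# false_positive_cost is the "arrogant" recall bias described in the
# abstract; the exact weights and surrounding learner are assumptions.

def recall_oriented_cost(gold_tags, pred_tags,
                         false_negative_cost=2.0,
                         false_positive_cost=0.5):
    """Per-token cost comparing predicted BIO tags against gold BIO tags.

    Missing an entity token (gold is an entity, prediction is 'O') is
    penalized more heavily than hallucinating one, pushing the learner
    toward higher recall at some expense of precision.
    """
    cost = 0.0
    for gold, pred in zip(gold_tags, pred_tags):
        if gold == pred:
            continue
        if gold != "O" and pred == "O":      # missed entity token: recall error
            cost += false_negative_cost
        elif gold == "O" and pred != "O":    # spurious entity token: precision error
            cost += false_positive_cost
        else:                                # wrong entity type or boundary
            cost += 1.0
    return cost


if __name__ == "__main__":
    gold = ["B-PER", "I-PER", "O", "B-LOC"]
    pred = ["O",     "I-PER", "O", "O"]      # two missed entity tokens
    print(recall_oriented_cost(gold, pred))  # 4.0 under the assumed weights
```

In a cost-augmented update, this cost would be added to the model score during decoding, so hypotheses that drop entities look artificially attractive to the decoder and the learner is forced to update against them more aggressively.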
Keywords
training data, target domain, self-training stage, unlabeled target-domain data, semisupervised domain adaptation, recall-oriented learning, standard categories, target-domain recall, standard supervised learning, Arabic Wikipedia, favor recall, sequence model