Effective Use of Context in Noisy Entity Linking.

EMNLP (2018)

Cited by 31 | 66 views
Abstract
To disambiguate between closely related concepts, entity linking systems need to effectively distill cues from a mention's textual context. We investigate several techniques for using these cues in the task of noisy entity linking on short texts. Our starting point is a state-of-the-art attention-based model from prior work; while this model's attention typically identifies context that is topically relevant, it fails to identify some of the most indicative context words, especially those exhibiting lexical overlap with the true title. Augmenting the model with convolutional networks over characters still leaves it largely unable to pick up on these cues compared to sparse features that target them directly, indicating that automatically learning how to identify relevant character-level context features is a hard problem. Armed with these sparse features, our final system outperforms past work on the WikilinksNED test set by 2.8% absolute.
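The lexical-overlap cue described in the abstract can be made concrete with a small sketch. The snippet below is illustrative only and is not the paper's actual feature set: the function names (`tokenize`, `overlap_features`) and the specific feature definitions (an exact-token match and a 4-character prefix match) are assumptions chosen to show the kind of sparse indicator features that directly target overlap between a mention's context and a candidate entity title.

```python
# Illustrative sketch (assumed, not the paper's exact features):
# sparse indicator features capturing lexical overlap between a
# mention's context window and a candidate entity title.
import re
from typing import Dict, List


def tokenize(text: str) -> List[str]:
    """Lowercased alphanumeric tokens, punctuation stripped."""
    return re.findall(r"[a-z0-9]+", text.lower())


def overlap_features(context: List[str], candidate_title: str) -> Dict[str, float]:
    """Binary sparse features over context/title lexical overlap."""
    title_tokens = set(tokenize(candidate_title))
    feats: Dict[str, float] = {}
    for word in context:
        w = word.lower()
        if w in title_tokens:
            # Context word appears verbatim in the candidate title.
            feats["exact_token_overlap"] = 1.0
        elif len(w) >= 4 and any(t.startswith(w[:4]) for t in title_tokens):
            # Rough character-level cue: shared 4-character prefix.
            feats["prefix4_overlap"] = 1.0
    return feats


# Example: context around the mention "Java" with two candidate titles.
ctx = ["the", "programming", "language", "was", "released"]
print(overlap_features(ctx, "Java (programming language)"))  # {'exact_token_overlap': 1.0}
print(overlap_features(ctx, "Java (island)"))                # {}
```

In this kind of setup, the sparse features fire directly on surface-string evidence, which is exactly the signal the abstract reports the attention-based and character-CNN models struggle to learn automatically.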