An Empirical Study of Pre-trained Embedding on Ultra-Fine Entity Typing.

Yanping Wang, Xin Xin, Ping Guo

SMC (2020)

Abstract
Embeddings generated by pre-trained models have attracted the attention of many scholars in the past few years. Most context-sensitive embeddings have been shown to benefit basic classification tasks, which involve only a few types. In this paper, we make an empirical comparison of different pre-trained embeddings on the task of ultra-fine entity typing, which has more than 10k types. We apply 7 kinds of pre-trained embedding to the typing model to test whether pre-trained embeddings have a positive effect. The results indicate that almost all context-sensitive pre-trained embeddings improve performance over models using GloVe. The embedding generated by BERT achieves the best performance on both the Ultra-Fine dataset and the OntoNotes dataset, which shows that BERT has a better capability to extract finer-grained information than other pre-trained models.
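The abstract does not spell out the typing model's architecture, but ultra-fine entity typing is commonly cast as multi-label classification: a mention representation (e.g. a pooled GloVe or BERT embedding) is scored against one learned vector per type, and every type whose sigmoid score exceeds a threshold is predicted. The following is a minimal sketch of that scoring step under this assumption, with toy dimensions and random vectors standing in for real embeddings and learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8    # toy dimension; GloVe uses e.g. 300, BERT-base 768
NUM_TYPES = 16   # the Ultra-Fine dataset actually has over 10k types

# Hypothetical mention representation, e.g. a pooled contextual embedding
mention_vec = rng.standard_normal(EMBED_DIM)

# One learned embedding per type; typing is multi-label, so types
# are scored independently rather than with a softmax over all types
type_embeddings = rng.standard_normal((NUM_TYPES, EMBED_DIM))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Dot-product score for every type, squashed to a per-type probability
probs = sigmoid(type_embeddings @ mention_vec)

# Predict all types whose probability exceeds 0.5
predicted_types = np.flatnonzero(probs > 0.5)
```

Swapping the source of `mention_vec` (GloVe average vs. a contextual encoder's output) while keeping the scoring layer fixed is one way such an embedding comparison can be run.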
Keywords
pre-trained embedding, entity typing