What's in a Name? Are BERT Named Entity Representations just as Good for any other Name?

5th Workshop on Representation Learning for NLP (RepL4NLP 2020), 2020

Cited by 22 | Views 49
Abstract
We evaluate named entity representations of BERT-based NLP models by investigating their robustness to replacements from the same typed class in the input. We highlight that, although such perturbations are natural on several tasks, state-of-the-art trained models are surprisingly brittle. This brittleness persists even with recent entity-aware BERT models. We also try to discern the cause of this non-robustness, considering factors such as tokenization and frequency of occurrence. We then provide a simple method that ensembles predictions from multiple replacements while jointly modeling the uncertainty of type annotations and label predictions. Experiments on three NLP tasks show that our method enhances robustness and increases accuracy on both natural and adversarial datasets.
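The core idea is easy to illustrate: generate input variants by swapping a named entity for other names of the same type, run the classifier on each variant, and average the predicted distributions. The sketch below shows this in Python, assuming an off-the-shelf BERT-style sentiment classifier; the checkpoint, template, and replacement names are illustrative assumptions, and the paper's joint modeling of type-annotation uncertainty is omitted for brevity.

```python
# Minimal sketch: probe robustness to same-typed entity replacements and
# ensemble predictions over the replacements by averaging class probabilities.
import numpy as np
from transformers import pipeline

# Any BERT-based classifier works here; this checkpoint is an assumption.
clf = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

TEMPLATE = "{name} delivered a truly memorable performance in the film."
# Same-typed (PERSON) replacement candidates -- hypothetical examples.
NAMES = ["John Smith", "Priya Patel", "Wei Zhang", "Amara Okafor"]

def ensemble_predict(template, names):
    """Average class probabilities over same-typed entity replacements."""
    all_probs, labels = [], None
    for name in names:
        # top_k=None returns a score for every class, not just the argmax.
        scores = clf(template.format(name=name), top_k=None)
        scores = sorted(scores, key=lambda s: s["label"])  # fix class order
        labels = [s["label"] for s in scores]
        all_probs.append([s["score"] for s in scores])
    return dict(zip(labels, np.mean(all_probs, axis=0)))

print(ensemble_predict(TEMPLATE, NAMES))
```

If the per-name predictions disagree noticeably, the model is exhibiting exactly the brittleness the abstract describes; the averaged distribution is one simple way to smooth it out.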
Keywords
BERT, entity representations, other name