Does the Objective Matter? Comparing Training Objectives for Pronoun Resolution
Conference on Empirical Methods in Natural Language Processing (2020)
Abstract
Hard cases of pronoun resolution have been used as a long-standing benchmark for commonsense reasoning. In the recent literature, pre-trained language models have been used to obtain state-of-the-art results on pronoun resolution. Overall, four categories of training and evaluation objectives have been introduced. The variety of training datasets and pre-trained language models used in these works makes it unclear whether the choice of training objective is critical. In this work, we make a fair comparison of the performance and seed-wise stability of four models that represent the four categories of objectives. Our experiments show that the objective of sequence ranking performs the best in-domain, while the objective of semantic similarity between candidates and pronoun performs the best out-of-domain. We also observe a seed-wise instability of the model using sequence ranking, which is not the case when the other objectives are used.
Keywords
pronoun resolution, training objectives, objective matter