How well do LSTM language models learn filler-gap dependencies?

Satoru Ozaki, Dan Yurovsky, Lori Levin

SCiL (2022)

Abstract
This paper revisits the question of what LSTMs know about the syntax of filler-gap dependencies in English. One contribution is to adjust the metrics used by Wilcox et al. (2018) and show that their language models (LMs) learn embedded wh-questions, a kind of filler-gap dependency, better than originally claimed. Another contribution is to examine four additional filler-gap dependency constructions to see whether LMs perform equally well on all types of filler-gap dependencies. We find that different constructions are learned to different extents, and that performance correlates with the frequency of each construction in the Penn Treebank Wall Street Journal corpus.