Shades of meaning: Natural language models offer insights and challenges to psychological understanding of lexical ambiguity

crossref(2022)

Abstract
Lexical ambiguity presents a profound and enduring challenge to the language sciences. Researchers for decades have grappled with the problem of how language users learn, represent and process words with more than one meaning. Our work offers new insight into psychological understanding of lexical ambiguity through a series of simulations that capitalise on recent advances in natural language models. These models have no grounded understanding of the meanings of words at all; they simply learn to predict words based on the surrounding context provided by other words. Yet, our analyses show that their representations capture meaningful distinctions that align with lexicographic classifications and psychological theories of ambiguity. Likewise, our analyses provide initial evidence that these models capture the processing penalties that arise when ambiguous words are encountered in online sentence understanding. These simulations raise challenges for psychological theories of lexical ambiguity and suggest new avenues for research.
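The abstract notes that these models "simply learn to predict words based on the surrounding context provided by other words," and that the resulting representations separate the senses of ambiguous words. The toy sketch below illustrates the underlying distributional idea with hand-built co-occurrence vectors rather than a trained language model; the sentences, window size, and target word are all illustrative assumptions, not the paper's actual materials or method.

```python
from collections import Counter
import math

# Toy sentences in which "bank" appears in two different senses.
# Purely illustrative; the paper uses large pretrained language
# models, not this hand-built co-occurrence sketch.
financial = [
    "the bank approved the loan and the money transfer",
    "she deposited money at the bank before the interest deadline",
]
river = [
    "the river bank was covered in wet grass and water",
    "they fished from the bank where the water ran slow",
]

def context_vector(sentences, target, window=3):
    """Count words occurring within `window` positions of `target`."""
    counts = Counter()
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == target:
                lo, hi = max(0, i - window), i + window + 1
                for j in range(lo, hi):
                    if j != i:
                        counts[toks[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

fin_vec = context_vector(financial, "bank")
riv_vec = context_vector(river, "bank")

# The two context vectors for the same word form diverge,
# mirroring how contextual models come to separate word senses.
sim = cosine(fin_vec, riv_vec)
print(f"similarity between the two 'bank' contexts: {sim:.2f}")
```

In a modern contextual language model the same separation emerges in the hidden states: tokens of an ambiguous word drift toward different regions of representation space depending on their sentential context, which is what lets the authors' analyses compare model distinctions with lexicographic sense classifications.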