Identify Shifts of Word Semantics through Bayesian Surprise.

SIGIR 2018

Abstract
Much work has been done recently on learning word embeddings from large corpora, which attempts to locate words in a static, high-dimensional semantic space. In reality, such corpora often span a sufficiently long time period during which the meanings of many words may have changed. The co-evolution of word meanings may also distort the semantic space, making static embeddings unable to accurately represent the dynamics of semantics. In this paper, we present a novel computational method to capture such changes and to model the evolution of word semantics. Distinct from existing approaches that learn word embeddings independently for each time period and then align them, our method explicitly establishes the stable topological structure of word semantics and identifies surprising changes in the semantic space over time through a principled statistical method. Empirical experiments on large-scale real-world corpora demonstrate the effectiveness of the proposed approach, which outperforms the state-of-the-art by a large margin.
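The abstract does not spell out the statistical machinery, but Bayesian surprise is conventionally defined as the KL divergence between a posterior belief and the prior it updates. Below is a minimal illustrative sketch of that general idea applied to a word's neighbour co-occurrence counts under a Dirichlet-multinomial model; the model choice, function names, and toy counts are assumptions for illustration, not the paper's actual method.

import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_post, alpha_prior):
    # KL( Dir(alpha_post) || Dir(alpha_prior) ), computed in closed form.
    a = np.asarray(alpha_post, dtype=float)
    b = np.asarray(alpha_prior, dtype=float)
    a0, b0 = a.sum(), b.sum()
    return (gammaln(a0) - gammaln(a).sum()
            - gammaln(b0) + gammaln(b).sum()
            + np.dot(a - b, digamma(a) - digamma(a0)))

def bayesian_surprise(old_counts, new_counts, pseudo=1.0):
    # Prior belief about a word's neighbour distribution comes from the
    # earlier time slice; the posterior folds in the later slice's counts.
    # Bayesian surprise is the KL divergence between posterior and prior.
    alpha_prior = np.asarray(old_counts, dtype=float) + pseudo
    alpha_post = alpha_prior + np.asarray(new_counts, dtype=float)
    return dirichlet_kl(alpha_post, alpha_prior)

# Hypothetical co-occurrence counts of one target word with four neighbour
# words in an earlier vs. a later slice of the corpus.
earlier = [120, 80, 5, 2]    # usage dominated by the first two neighbours
later   = [10, 15, 90, 110]  # mass moves to the last two neighbours
print(bayesian_surprise(earlier, later))    # large surprise: candidate shift
print(bayesian_surprise(earlier, earlier))  # small surprise: stable usage

Ranking words by such a surprise score across consecutive time slices is one simple way to flag candidates for semantic shift; the paper's own formulation additionally anchors the comparison in a stable topological structure of the semantic space.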
Keywords
Word Embeddings, Semantic Shifts, Bayesian Surprise