GGP - Glossary Guided Post-processing for Word Embedding Learning.

LREC 2020

Abstract
Word embedding learning is the task of mapping each word to a low-dimensional, continuous vector based on a large corpus. To enhance corpus-based word embedding models, researchers utilize domain knowledge to learn more distinguishable representations via joint-optimization and post-processing models. However, joint-optimization models require substantial training time, and existing post-processing models mostly consider semantic knowledge, so the learned embeddings carry less functional information. Compared with other semantic knowledge sources, a glossary is a comprehensive linguistic resource that contains complete semantics. Previous glossary-based post-processing methods only processed words that occur in the glossary and did not distinguish the multiple senses of each word. In this paper, to make better use of the glossary, we utilize an attention mechanism to integrate multiple sense representations, each learned separately. By measuring the similarity between a word representation and its combined sense representation, we aim to capture more topical and functional information. We propose the GGP (Glossary Guided Post-processing) word embedding model, which consists of a global post-processing function that fine-tunes each word vector and an auto-encoding model that learns sense representations; it further constrains each post-processed word representation to be similar to the composition of its sense representations. We evaluate our model against two state-of-the-art models on six word topical/functional similarity datasets; it outperforms the competitors by an average of 4.1% across all datasets and outperforms GloVe by more than 7%.
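The abstract describes attention-weighted composition of sense representations and a similarity constraint between each post-processed word vector and that composition. The sketch below is a minimal illustration of that idea, not the authors' released code: the dot-product attention scoring, the cosine-based loss, and all names are illustrative assumptions, since the abstract does not specify the exact formulation.

```python
# Illustrative sketch of the core GGP idea from the abstract (assumed details):
# attention combines a word's per-sense representations, and a loss pulls the
# post-processed word vector toward that composition.
import torch
import torch.nn.functional as F

def compose_senses(word_vec, sense_vecs):
    """Attention-weighted composition of a word's sense representations.

    word_vec:   (d,)   post-processed embedding of the word
    sense_vecs: (k, d) representations of its k glossary senses
                (e.g., learned by an auto-encoder over gloss text)
    """
    scores = sense_vecs @ word_vec      # (k,) dot-product attention (assumed form)
    weights = F.softmax(scores, dim=0)  # (k,) attention weights over senses
    return weights @ sense_vecs         # (d,) combined sense representation

def glossary_loss(word_vec, sense_vecs):
    """Encourage the word vector and its composed sense representation to be
    similar -- one plausible instantiation of the abstract's constraint."""
    combined = compose_senses(word_vec, sense_vecs)
    return 1.0 - F.cosine_similarity(word_vec, combined, dim=0)

# Toy usage: fine-tune a single word vector toward its sense composition.
d, k = 300, 3
word = torch.randn(d, requires_grad=True)
senses = torch.randn(k, d)
loss = glossary_loss(word, senses)
loss.backward()  # gradients flow into `word` for post-processing updates
```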
Keywords
Word Embedding, Post-processing model, Representation Learning