Neural Variational Gaussian Mixture Topic Model

ACM Transactions on Asian and Low-Resource Language Information Processing (2023)

Abstract
Neural variational inference-based topic modeling has achieved great success in mining abstract topics from documents. However, these topic models mainly focus on optimizing the topic proportions of documents, while the quality and internal structure of the topics themselves are neglected. Specifically, these models offer no guarantee that semantically related words are assigned to the same topic, and thus struggle to ensure the interpretability of topics. Moreover, many topical words recur among the top words of different topics, which makes the learned topics semantically redundant, mutually similar, and of little value for further study. To solve these problems, we propose a novel neural topic model called the Neural Variational Gaussian Mixture Topic Model (NVGMTM). We use Gaussian distributions to depict the semantic relevance among words within a topic. Each topic in NVGMTM is modeled as a multivariate Gaussian distribution over words in the word-embedding space. Thus, semantically related words share similar probabilities in each topic, which makes the topics more coherent and interpretable. Experimental results on two public corpora show that the proposed model outperforms the state-of-the-art baselines.
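The core mechanism of the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a hypothetical illustration of the stated idea that a topic is a multivariate Gaussian over word embeddings, so that words with nearby embeddings receive similar probabilities under the same topic. All names and shapes (`word_embeddings`, `topic_mean`, `topic_cov`, the vocabulary size, the use of random stand-in embeddings) are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical sketch: one topic as a Gaussian over word embeddings.
# Names, shapes, and random stand-in embeddings are assumptions,
# not the paper's actual NVGMTM implementation.

rng = np.random.default_rng(0)
vocab_size, embed_dim = 1000, 50

# Pretrained word embeddings would be used in practice; random here.
word_embeddings = rng.normal(size=(vocab_size, embed_dim))

# One topic: a mean vector and a (diagonal) covariance in embedding space.
topic_mean = rng.normal(size=embed_dim)
topic_cov = np.diag(rng.uniform(0.5, 1.5, size=embed_dim))

# Log-density of each word's embedding under the topic's Gaussian.
log_densities = multivariate_normal.logpdf(
    word_embeddings, mean=topic_mean, cov=topic_cov
)

# Normalize over the vocabulary in log space (softmax) to obtain a
# topic-word distribution: words whose embeddings lie close together
# receive similar probabilities, which is what makes topics coherent.
log_densities -= log_densities.max()
topic_word_probs = np.exp(log_densities)
topic_word_probs /= topic_word_probs.sum()

# Indices of the top-10 words for this topic.
top_words = np.argsort(topic_word_probs)[::-1][:10]
print(top_words, topic_word_probs[top_words])
```

In a full model along these lines, the topic means and covariances would be learned jointly with the document-topic proportions via neural variational inference rather than fixed as above.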
Keywords
Neural variational Gaussian mixture topic model, topic quality, topic discrimination