Graph Representation Learning with Adaptive Mixtures.

2021 International Conference on Data Mining Workshops (ICDMW), 2021

Abstract
Graph Neural Networks (GNNs) are the current state-of-the-art models for learning node representations for many predictive tasks on graphs. Typically, a GNN reuses the same set of model parameters across all nodes in the graph to improve training efficiency and to exploit the translation-invariant properties of many datasets. However, this parameter-sharing scheme prevents GNNs from distinguishing isomorphic nodes, and the translation-invariance property may not hold in real-world graphs. In this paper, we present Graph Representation Learning with Adaptive Mixtures (GRAM), a novel approach for learning node representations in a graph by introducing multiple independent GNN models and a trainable mixture distribution for each node. GRAM is a general framework that can be readily embedded into existing models to enhance representation learning. It allows for adaptive node clustering with mixtures and joint optimization of all GNN models simultaneously. To achieve this, we develop a scalable Expectation-Maximization algorithm with stochastic gradient descent. Specifically, in the E-step, we approximate the posterior probability distribution of the latent cluster membership based on the K independent GNN models from the last iteration. In the M-step, we update all GNN models and the mixture parameters based on the refined latent cluster membership for downstream tasks, e.g., node classification. To fully exploit the graph's structural information, we further introduce a regularization term on unlabeled nodes that encourages adjacent nodes to have the same mixture distribution. By jointly optimizing the model end to end, GRAM learns to capture different abstract patterns on the graph for different node clusters, yielding a discriminative representation for each node. We evaluate GRAM on five benchmark datasets with extensive experiments.
GRAM is demonstrated to consistently boost state-of-the-art GNN variants in node classification tasks.
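The E-step/M-step scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: simple linear classifiers stand in for the K independent GNNs, the labels and edge list are toy data, and names such as `resp` (responsibilities) and `pi` (per-node mixture distribution) are our own. It shows one E-step (posterior over latent cluster membership given each model's likelihood) and the neighbor-consistency regularizer on the mixture distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, D, C = 3, 8, 4, 2            # mixture components, nodes, features, classes
X = rng.normal(size=(N, D))        # node features (toy data)
W = rng.normal(size=(K, D, C))     # one parameter set per stand-in "GNN"
pi = np.full((N, K), 1.0 / K)      # trainable per-node mixture distribution
y = rng.integers(0, C, size=N)     # toy node labels

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# E-step: posterior of latent cluster membership from each model's likelihood.
logits = np.einsum('nd,kdc->nkc', X, W)      # (N, K, C): class scores per model
probs = softmax(logits, axis=-1)
lik = probs[np.arange(N), :, y]              # (N, K): P(y_n | model k)
resp = pi * lik
resp /= resp.sum(axis=1, keepdims=True)      # responsibilities sum to 1 per node

# M-step (abbreviated): update model parameters and mixture weights using the
# refined membership; here we only refresh the per-node mixture distribution.
pi = resp

# Graph regularizer: penalize adjacent nodes with differing mixtures.
edges = [(0, 1), (1, 2)]                     # toy edge list
reg = sum(np.sum((pi[u] - pi[v]) ** 2) for u, v in edges)
```

In the full method, the M-step would also take gradient steps on all K GNNs weighted by the responsibilities, and `reg` would be added to the downstream-task loss for unlabeled nodes.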
Keywords
graph neural networks,expectation-maximization algorithm,clustering,mixture models