Supervised contrastive learning for graph representation enhancement

Mohadeseh Ghayekhloo, Ahmad Nickabadi

Neurocomputing (2024)

Abstract
Graph Neural Networks (GNNs) have achieved significant success in various applications, but they face challenges when labeled nodes are limited. A self-supervised learning paradigm has emerged that enables GNN training without labeled nodes and can even surpass GNNs trained with limited labeled data. However, self-supervised methods produce node representations that are not class-discriminative, since no label information is available during training. In this paper, we propose a supervised graph contrastive learning (SGCL) framework (source code available at https://github.com/mgh790/SGCL) to tackle the issue of limited labeled nodes, ensuring coherent grouping of nodes within the same class. We introduce augmentation techniques based on a novel centrality function to highlight important topological structures. Additionally, we inject noise into less informative node features, compelling the model to extract the underlying semantic information. Our approach combines a supervised contrastive loss with node similarity regularization to group unlabeled nodes consistently with labeled ones. Furthermore, we employ pseudo-labeling to propagate label information to distant nodes and address underfitting, especially for low-degree nodes. Experimental results on real-world graphs demonstrate that SGCL outperforms both semi-supervised and self-supervised methods on node classification.
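For readers unfamiliar with the supervised contrastive objective the abstract builds on, the sketch below shows the standard supervised contrastive (SupCon) loss of Khosla et al. (2020) applied to node embeddings, pulling same-class nodes together and pushing different-class nodes apart. This is an illustrative reconstruction under assumptions, not the paper's exact SGCL objective: the function name sup_con_loss and the temperature default are hypothetical, and SGCL additionally combines this term with the centrality-based augmentations, node-similarity regularization, and pseudo-labeling described above.

import torch
import torch.nn.functional as F

def sup_con_loss(z: torch.Tensor, labels: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Supervised contrastive loss over labeled node embeddings.

    z:      (N, d) embeddings of labeled nodes, e.g., from a GNN encoder.
    labels: (N,) integer class labels.
    """
    z = F.normalize(z, dim=1)                   # compare in cosine-similarity space
    sim = z @ z.t() / temperature               # (N, N) scaled similarity logits
    n = z.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float('-inf'))  # exclude self-pairs from the softmax
    # Positives are same-class pairs, self excluded.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives; skip anchors with none.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_pos = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob)).sum(dim=1)
    return -(sum_pos[valid] / pos_counts[valid]).mean()

In a full training loop, this term would be computed on embeddings of augmented graph views and summed with the node-similarity regularizer and pseudo-label supervision mentioned in the abstract; those components are omitted here for brevity.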
Keywords
Graph neural networks, Supervised graph contrastive learning, Pseudo-labeling