GCL: Contrastive learning instead of graph convolution for node classification

Neurocomputing (2023)

Abstract
Contrastive learning, as an effective representation learning technique, has attracted tremendous attention due to its general success in downstream tasks. However, theoretical explanations and quantitative experimental analyses of its generalization ability are still limited. These issues are pivotal yet challenging for improving both the interpretability and the performance of contrastive learning. To address them, we first re-examine the least squares bias-variance decomposition and derive GCL, a novel bias-variance decomposition with two optional generalized biases and one generalized variance. GCL is shown to extend to common contrastive learning models, so it can serve as a unified contrastive learning framework. Meanwhile, we report a surprising finding: the gradient descent of the contrastive loss with respect to the feature representations is closely related to the message passing mechanism (graph convolution) of Graph Neural Networks (GNNs). A contrastive learning model called GCP is then proposed as a convincing implementation of GCL. GCP has a pure MLP-based structure and employs a conventional cross-entropy loss to reduce the bias between predictions and ground-truth labels, together with two optional contrastive losses to optimize the variance of the model. Finally, extensive experiments demonstrate that the two biases proposed by GCL have their own merits; GCP achieves comparable or even better performance than GNNs in a more efficient and robust manner, and its bias and variance satisfy the bias-variance tradeoff to some extent.
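For reference, the classical least-squares bias-variance decomposition that the abstract takes as its starting point can be written as below. This is the standard textbook identity for a noiseless target, not the paper's generalized two-bias/one-variance form, which the abstract does not spell out.

```latex
% Standard bias-variance decomposition of the expected squared error of a
% predictor \hat{f}(x; D) trained on a random dataset D, for a fixed target y:
\mathbb{E}_{D}\!\big[(y - \hat{f}(x; D))^2\big]
  = \underbrace{\big(y - \mathbb{E}_{D}[\hat{f}(x; D)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_{D}\!\big[\big(\hat{f}(x; D) - \mathbb{E}_{D}[\hat{f}(x; D)]\big)^2\big]}_{\text{variance}}
```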
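The abstract's description of GCP (a pure MLP backbone, a cross-entropy term to reduce bias, and contrastive terms to control variance) suggests a training recipe along the following lines. This is a minimal sketch under assumptions, not the authors' implementation: the neighbor-as-positive InfoNCE term, the weight `lam`, and all class and function names are illustrative, since the abstract does not specify the two contrastive losses.

```python
# Minimal sketch of a GCP-style model (assumed form, not the authors' code):
# an MLP classifier trained with cross-entropy (bias term) plus a
# contrastive loss over graph edges (variance term).
import torch
import torch.nn.functional as F
from torch import nn

class GCP(nn.Module):
    """Pure MLP backbone: no message passing / graph convolution at inference."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim)
        )
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)       # node representations
        return z, self.head(z)    # representations and class logits

def neighbor_contrastive_loss(z, edge_index, tau=0.5):
    """InfoNCE-style loss treating graph neighbors as positives (an assumed
    instantiation of the paper's contrastive terms)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                 # pairwise cosine similarities
    src, dst = edge_index                 # positive pairs = edges, shape (2, E)
    log_prob = F.log_softmax(sim, dim=1)  # contrast each node against all others
    return -log_prob[src, dst].mean()

def train_step(model, opt, x, y, train_mask, edge_index, lam=0.5):
    model.train()
    opt.zero_grad()
    z, logits = model(x)
    loss = F.cross_entropy(logits[train_mask], y[train_mask])     # bias term
    loss = loss + lam * neighbor_contrastive_loss(z, edge_index)  # variance term
    loss.backward()
    opt.step()
    return loss.item()
```

The gradient of `neighbor_contrastive_loss` with respect to `z` pulls adjacent nodes' representations together, which is one plausible reading of the abstract's claim that gradient descent on a contrastive loss mimics graph convolution, while the forward pass itself stays graph-free.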
Keywords
Contrastive learning, Graph neural network, Bias-variance tradeoff, Generalization error