A Survey on Self-Supervised Pre-Training of Graph Foundation Models: A Knowledge-Based Perspective
arXiv (2024)
Abstract
Graph self-supervised learning is now a go-to method for pre-training graph
foundation models, including graph neural networks, graph transformers, and
more recent large language model (LLM)-based graph models. A wide
variety of knowledge patterns is embedded in the structure and properties of
graphs that can be used for pre-training, but we still lack a systematic overview of
self-supervised pre-training tasks from the perspective of graph knowledge. In
this paper, we comprehensively survey and analyze the pre-training tasks of
graph foundation models from a knowledge-based perspective, consisting of
microscopic knowledge (nodes, links, etc.) and macroscopic knowledge (clusters, global
structure, etc.). The survey covers a total of 9 knowledge categories and 25
pre-training tasks, as well as various downstream task adaptation strategies.
Furthermore, an extensive list of the related papers with detailed metadata is
provided at https://github.com/Newiz430/Pretext.
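To make the notion of a self-supervised pre-training task concrete, below is a minimal sketch (not taken from the paper) of one microscopic pretext task the survey categorizes: link prediction as a self-supervised objective for pre-training a graph encoder. The encoder, the toy graph, and the names `GraphEncoder` and `link_pred_loss` are illustrative assumptions, written in plain PyTorch rather than any specific library surveyed.

```python
# Illustrative sketch: link-prediction pretext for graph pre-training.
# All names and the toy graph are hypothetical, not from the surveyed paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """One GCN-style propagation step: H = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim, bias=False)

    def forward(self, x: torch.Tensor, adj_hat: torch.Tensor) -> torch.Tensor:
        return F.relu(adj_hat @ self.lin(x))

def link_pred_loss(z: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy on dot-product scores of positive vs. negative node pairs."""
    pos_score = (z[pos[0]] * z[pos[1]]).sum(-1)
    neg_score = (z[neg[0]] * z[neg[1]]).sum(-1)
    scores = torch.cat([pos_score, neg_score])
    labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    return F.binary_cross_entropy_with_logits(scores, labels)

# Toy graph: 4 nodes, random features, symmetrically normalized adjacency with self-loops.
x = torch.randn(4, 8)
edges = torch.tensor([[0, 1, 2, 0], [1, 2, 3, 3]])   # observed (positive) edges
adj = torch.eye(4)
adj[edges[0], edges[1]] = 1.0
adj[edges[1], edges[0]] = 1.0
deg_inv_sqrt = adj.sum(-1).pow(-0.5)
adj_hat = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

enc = GraphEncoder(8, 16)
opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
neg = torch.randint(0, 4, (2, edges.size(1)))        # uniformly sampled negative pairs
for _ in range(100):                                 # self-supervised pre-training loop
    opt.zero_grad()
    loss = link_pred_loss(enc(x, adj_hat), edges, neg)
    loss.backward()
    opt.step()
```

After pre-training, the encoder's node embeddings would typically be adapted to a downstream task (e.g., node classification) by fine-tuning or by attaching a task-specific head, which is the kind of adaptation strategy the survey also reviews.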