Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications.

KDD: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (2023)

Abstract
Pre-training models on large text corpora has proven effective for a variety of downstream applications in the NLP domain. In the graph mining domain, an analogous strategy is to pre-train graph models on large graphs in the hope of benefiting downstream graph applications, which several recent studies have also explored. However, no existing study has investigated pre-training text-plus-graph models on large heterogeneous graphs with abundant textual information (a.k.a. large graph corpora) and then fine-tuning the models on different related downstream applications with different graph schemas. To address this problem, we propose a framework of graph-aware language model pre-training (GaLM) on a large graph corpus, which incorporates large language models and graph neural networks, together with a variety of fine-tuning methods for downstream applications. We conduct extensive experiments on Amazon's real internal datasets and on large public datasets. Comprehensive empirical results and in-depth analysis demonstrate the effectiveness of our proposed methods, along with lessons learned.
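The core idea the abstract describes — encoding each node's text into a vector and then mixing node representations along graph edges — can be illustrated with a minimal, hypothetical sketch. The toy hash-based `encode_text` stands in for a real language-model encoder, and `gnn_layer` performs one mean-aggregation message-passing step; none of these names or details come from the paper itself.

```python
# Hypothetical sketch of the graph-aware LM idea: node text -> vectors,
# then one GNN-style mean-aggregation layer mixes each node's embedding
# with its neighbors'. A real system would use an LLM encoder and a
# trained heterogeneous GNN; this only illustrates the data flow.

def encode_text(text, dim=8):
    """Toy stand-in for an LM encoder: deterministic bag-of-words hashing."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dim] += 1.0
    return vec

def gnn_layer(embeddings, adjacency):
    """One mean-aggregation message-passing step over a node -> neighbors map."""
    out = {}
    for node, emb in embeddings.items():
        neighbors = adjacency.get(node, [])
        agg = [0.0] * len(emb)
        for n in neighbors:
            for i, v in enumerate(embeddings[n]):
                agg[i] += v
        if neighbors:
            agg = [v / len(neighbors) for v in agg]
        # Combine the node's own representation with its aggregated neighborhood.
        out[node] = [0.5 * s + 0.5 * a for s, a in zip(emb, agg)]
    return out

# Tiny heterogeneous-graph-like example: product and query nodes with text,
# connected by query-clicks-product edges (illustrative data only).
texts = {
    "product_1": "wireless noise cancelling headphones",
    "product_2": "bluetooth over ear headphones",
    "query_1": "best wireless headphones",
}
adjacency = {
    "query_1": ["product_1", "product_2"],
    "product_1": ["query_1"],
    "product_2": ["query_1"],
}

embeddings = {n: encode_text(t) for n, t in texts.items()}
fused = gnn_layer(embeddings, adjacency)
```

Fine-tuning on a downstream application with a different graph schema would then reuse the pre-trained text encoder while adapting the aggregation layers to the new node and edge types.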
Keywords
Large Language Model, Pre-Training and Fine-Tuning, Graph Neural Network, Heterogeneous Graph