Graph Language Models
CoRR (2024)
Abstract
While Language Models (LMs) have become workhorses for NLP, their interplay with textual knowledge graphs (KGs), structured memories of general or domain knowledge, is an active research area. Current embedding methods for such graphs typically either (i) linearize the graph and embed it with a sequential LM, which underutilizes structural information, or (ii) use Graph Neural Networks (GNNs) to preserve graph structure, although GNNs cannot represent textual features as well as a pretrained LM can. In this work we introduce a novel language model, the Graph Language Model (GLM), that integrates the strengths of both approaches while mitigating their weaknesses. The GLM's parameters are initialized from a pretrained LM to facilitate a nuanced understanding of individual concepts and triplets. At the same time, its architecture incorporates graph biases that promote effective knowledge distribution within the graph. Empirical evaluations on relation classification tasks over ConceptNet subgraphs show that GLM embeddings surpass both LM- and GNN-based baselines in supervised and zero-shot settings.
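To make the core idea concrete, below is a minimal sketch of one way to combine pretrained-LM initialization with a graph bias: load a pretrained T5 encoder and restrict self-attention so that tokens attend within their own triplet and across graph-adjacent triplets. This is an illustration under stated assumptions, not the authors' released implementation; the model choice (t5-small), the toy triplets, the helper graph_attention_mask, and the specific masking scheme are all hypothetical. The published GLM additionally adapts relative position encodings to graph structure, which this sketch omits.

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

def graph_attention_mask(num_tokens, token_spans, edges):
    """Boolean (num_tokens x num_tokens) mask: a token may attend to tokens
    in its own triplet and to tokens of graph-adjacent triplets.
    token_spans: one (start, end) token range per triplet.
    edges: pairs of triplet indices that share a concept in the graph."""
    mask = torch.zeros(num_tokens, num_tokens, dtype=torch.bool)
    for start, end in token_spans:            # within-triplet attention
        mask[start:end, start:end] = True
    for i, j in edges:                        # attention across connected triplets
        (si, ei), (sj, ej) = token_spans[i], token_spans[j]
        mask[si:ei, sj:ej] = True
        mask[sj:ej, si:ei] = True
    return mask

tok = AutoTokenizer.from_pretrained("t5-small")
enc = T5EncoderModel.from_pretrained("t5-small")   # initialize from pretrained LM weights

# Two toy ConceptNet-style triplets that share the concept "dog" (hypothetical example).
triplets = ["dog is a animal", "dog has tail"]
pieces = [tok(t, add_special_tokens=False).input_ids for t in triplets]

token_spans, pos = [], 0
for p in pieces:
    token_spans.append((pos, pos + len(p)))
    pos += len(p)

input_ids = torch.tensor([[tid for p in pieces for tid in p]])
mask = graph_attention_mask(pos, token_spans, edges=[(0, 1)])

# Hugging Face encoders accept a (batch, seq, seq) attention mask; 1 = attend, 0 = blocked.
out = enc(input_ids=input_ids, attention_mask=mask.unsqueeze(0).float())
print(out.last_hidden_state.shape)   # (1, num_tokens, d_model)
```

The design point the sketch tries to capture: every weight comes from the pretrained LM, so textual understanding of concepts and triplets is retained, while the structured attention mask injects the graph bias that lets information propagate only along KG edges.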