Fortune favors the invariant: Enhancing GNNs’ generalizability with Invariant Graph Learning

Guibin Zhang, Yiqiao Chen, Shiyu Wang, Kun Wang, Junfeng Fang

Knowledge-Based Systems (2024)

Abstract
Generalizable and transferable graph representation learning endows graph neural networks (GNNs) with the ability to extrapolate to potential test distributions. Nonetheless, current endeavors often ascribe degraded performance to a distribution shift in a single entity (feature or edge) and resort to uncontrollable augmentation. Inheriting the philosophy of invariant graph learning (IGL), which characterizes a full graph as an invariant core subgraph (rationale) plus a complementary trivial part (environment), we propose a universal operator termed InMvie to release GNNs' out-of-distribution generalization potential. The advantages of our proposal stem from two main factors: a comprehensive, customized view of each local subgraph, and the systematic encapsulation of environmental interventions. Concretely, a rationale miner is designed to find a small subset of the input graph, the rationale, which injects feature invariance into the model while filtering out spurious patterns, i.e., the environment. We then apply systematic environment intervention to ensure the model's out-of-distribution awareness. InMvie has been validated through experiments on both synthetic and real-world datasets, demonstrating its superiority over leading baselines in interpretability and generalization for node classification.
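The rationale/environment decomposition described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the edge-importance scores, the split ratio, and the helper names (`mine_rationale`, `intervene`) are all hypothetical; in InMvie the scores would come from a learned rationale miner.

```python
import numpy as np

def mine_rationale(edge_scores, ratio=0.5):
    """Split a graph's edges into an invariant rationale (top-scoring
    edges) and the complementary trivial environment (the rest),
    following the IGL decomposition sketched in the abstract.
    `edge_scores` is a 1-D array of importance scores, one per edge."""
    k = max(1, int(len(edge_scores) * ratio))
    order = np.argsort(edge_scores)[::-1]   # highest score first
    rationale = np.sort(order[:k])          # invariant core subgraph
    environment = np.sort(order[k:])        # spurious / trivial part
    return rationale, environment

def intervene(rationale_edges, env_pool, rng):
    """Environment intervention: recombine a fixed rationale with an
    environment drawn from another graph, so the label-relevant part
    stays unchanged while the spurious context varies."""
    swapped_env = env_pool[rng.integers(len(env_pool))]
    return np.concatenate([rationale_edges, swapped_env])

# Toy usage with hypothetical scores for a 6-edge graph.
rng = np.random.default_rng(0)
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.05])
rat, env = mine_rationale(scores, ratio=0.5)
print(rat, env)  # → [0 2 4] [1 3 5]
```

Training on such recombined graphs encourages the model to rely only on the rationale, which is the intuition behind the out-of-distribution awareness claimed in the abstract.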
Keywords
Graph neural networks, Invariant graph learning, Out-of-distribution generalization