Towards a More Stable and General Subgraph Information Bottleneck

ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

Abstract
Graph Neural Networks (GNNs) have been widely applied to graph-structured data. However, their lack of interpretability impedes practical deployment, especially in high-risk areas such as medical diagnosis. Recently, the Information Bottleneck (IB) principle has been extended to GNNs to identify a compact subgraph that is most informative about the class labels, which significantly improves the interpretability of decisions. However, existing Graph Information Bottleneck (GIB) models are either unstable during training (due to the difficulty of mutual information estimation) or focus only on a special kind of graph (e.g., brain networks) and thus generalize poorly to general graph datasets with varying graph sizes. In this work, we extend the recently developed Brain Information Bottleneck (BrainIB) to general graphs by introducing matrix-based Rényi’s α-order mutual information to stabilize training, and by designing a novel mask strategy to handle varying graph sizes, so that the new method can also be applied to social networks, molecules, etc. Extensive experiments on different types of graph datasets demonstrate the superior stability and generality of our model.
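The matrix-based Rényi's α-order mutual information mentioned above is estimated directly from eigenvalues of normalized Gram matrices, avoiding explicit density estimation. Below is a minimal NumPy sketch of that estimator; the function names, the RBF kernel, and the bandwidth `sigma` are illustrative choices, not details from the paper.

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    """RBF Gram matrix of the rows of X, normalized to unit trace."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)  # trace-normalized so eigenvalues sum to 1

def renyi_entropy(A, alpha=2.0):
    """Matrix-based Renyi alpha-order entropy:
    S_alpha(A) = 1/(1-alpha) * log2( sum_i lambda_i(A)^alpha )."""
    lam = np.linalg.eigvalsh(A)
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(lam ** alpha))

def renyi_mutual_information(X, Y, alpha=2.0, sigma=1.0):
    """I_alpha(X; Y) = S(A) + S(B) - S(A∘B / tr(A∘B)),
    where ∘ is the Hadamard (elementwise) product."""
    A = gram_matrix(X, sigma)
    B = gram_matrix(Y, sigma)
    AB = A * B
    AB = AB / np.trace(AB)
    return renyi_entropy(A, alpha) + renyi_entropy(B, alpha) - renyi_entropy(AB, alpha)
```

Because the estimate depends only on Gram-matrix eigenvalues, it is differentiable and can serve as a training loss, which is what makes it attractive for stabilizing IB-style objectives.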
Keywords
Graph Information Bottleneck, Generalization, Stability