Not All Neighbors are Friendly: Learning to Choose Hop Features to Improve Node Classification

Conference on Information and Knowledge Management (2022)

Abstract
The fundamental operation of Graph Neural Networks (GNNs) is the feature aggregation step performed over a node's neighbors based on the structure of the graph. In addition to its own features, each node receives combined features from its neighbors at each hop. These aggregated features help define the similarity or dissimilarity of nodes with respect to their labels and are useful for tasks like node classification. However, in real-world data, the features of neighbors at different hops may not correlate with the node's own features. Thus, indiscriminate feature aggregation by a GNN can introduce noisy features and degrade the model's performance. In this work, we show that selective aggregation leads to better performance than default aggregation on the node classification task. Furthermore, we propose the Dual-Net GNN architecture, which consists of a classifier model and a selector model. The classifier model is trained on a subset of input node features to predict node labels, while the selector model learns to provide the optimal input subset to the classifier for the best performance. These two models are trained jointly to learn the subset of features that yields the highest accuracy in node label prediction. With extensive experiments, we show that our proposed model outperforms state-of-the-art GNN models, with improvements of up to 27.8%.
Keywords
Graph Neural Networks, Node Classification
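The abstract describes a selector model that chooses which hop-aggregated features the classifier should use, with both models trained jointly. The snippet below is a minimal sketch of that idea, not the authors' implementation: it precomputes hop features as powers of a row-normalised adjacency applied to the node features, uses a soft sigmoid gate per hop as a stand-in for the selector, and trains the gate and a small classifier jointly with a cross-entropy loss. All names (`SelectorClassifier`, `precompute_hop_features`) and the toy random graph are hypothetical and for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelectorClassifier(nn.Module):
    """Sketch: selector gates hop-wise feature blocks; classifier predicts labels from gated features."""

    def __init__(self, in_dim, num_hops, hidden_dim, num_classes):
        super().__init__()
        # Selector scores each hop's feature block (one gate per hop, per node).
        self.selector = nn.Sequential(
            nn.Linear(in_dim * (num_hops + 1), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_hops + 1),
        )
        # Classifier consumes the concatenation of the gated hop features.
        self.classifier = nn.Sequential(
            nn.Linear(in_dim * (num_hops + 1), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, hop_feats):
        # hop_feats: list of (N, in_dim) tensors, one per hop (hop 0 = raw node features).
        stacked = torch.stack(hop_feats, dim=1)              # (N, K+1, in_dim)
        flat = stacked.flatten(start_dim=1)                  # (N, (K+1)*in_dim)
        gates = torch.sigmoid(self.selector(flat))           # (N, K+1): soft hop selection
        gated = (stacked * gates.unsqueeze(-1)).flatten(1)   # down-weight unhelpful hops
        return self.classifier(gated), gates


def precompute_hop_features(adj, x, num_hops):
    """Return [X, A_hat X, A_hat^2 X, ...] with a row-normalised adjacency plus self-loops."""
    a_hat = adj + torch.eye(adj.size(0))
    a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)
    feats, cur = [x], x
    for _ in range(num_hops):
        cur = a_hat @ cur
        feats.append(cur)
    return feats


# Toy joint-training loop on a random graph (illustration only).
torch.manual_seed(0)
n, d, k, c = 100, 16, 3, 4
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
x = torch.randn(n, d)
y = torch.randint(0, c, (n,))

model = SelectorClassifier(d, k, hidden_dim=64, num_classes=c)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
hop_feats = precompute_hop_features(adj, x, k)

for epoch in range(50):
    opt.zero_grad()
    logits, gates = model(hop_feats)
    loss = F.cross_entropy(logits, y)  # selector and classifier trained jointly
    loss.backward()
    opt.step()
```

In this sketch the "selection" is a continuous gate learned end-to-end; the paper's selector may instead produce a discrete subset of hop features, which would require a different relaxation or search procedure than the sigmoid gate used here.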