Robust Subgraph Learning by Monitoring Early Training Representations
arXiv (2024)
Abstract
Graph neural networks (GNNs) have attracted significant attention for their
outstanding performance in graph learning and node classification tasks.
However, their vulnerability to adversarial attacks, particularly through
susceptible nodes, poses a challenge in decision-making. The need for robust
graph summarization is evident in adversarial challenges resulting from the
propagation of attacks throughout the entire graph. In this paper, we address
both performance and adversarial robustness in graph input by introducing the
novel technique SHERD (Subgraph Learning Hale through Early Training
Representation Distances). SHERD leverages information from layers of a
partially trained graph convolutional network (GCN) to detect susceptible nodes
during adversarial attacks using standard distance metrics. The method
identifies "vulnerable (bad)" nodes and removes such nodes to form a robust
subgraph while maintaining node classification performance. Through our
experiments, we demonstrate SHERD's effectiveness in enhancing robustness
by comparing the network's performance on original and subgraph inputs
against various baselines under existing adversarial attacks. Our
experiments across multiple datasets, including citation datasets such as Cora,
Citeseer, and Pubmed, as well as microanatomical tissue structures of cell
graphs in the placenta, highlight that SHERD not only achieves substantial
improvement in robust performance but also outperforms several baselines in
terms of node classification accuracy and computational complexity.
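The screening step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of Euclidean distance between clean and perturbed early-training representations, and the fixed top-k removal cutoff are all assumptions made here for clarity.

```python
import numpy as np

def vulnerable_nodes(reps_clean, reps_attacked, k):
    """Rank nodes by how far their early-training representations shift
    under an adversarial perturbation; the k most-shifted nodes are
    flagged as vulnerable (hypothetical criterion, Euclidean distance)."""
    # Per-node distance between the two (n_nodes x dim) representation matrices.
    d = np.linalg.norm(reps_clean - reps_attacked, axis=1)
    return np.argsort(d)[-k:]  # indices of the k most-shifted nodes

def robust_subgraph(n_nodes, vulnerable):
    """Remove the flagged nodes; the remaining indices induce the subgraph."""
    mask = np.ones(n_nodes, dtype=bool)
    mask[vulnerable] = False
    return np.flatnonzero(mask)

# Toy usage: 6 nodes, 4-dim representations; perturb nodes 1 and 4 strongly.
rng = np.random.default_rng(0)
H_clean = rng.normal(size=(6, 4))
H_attacked = H_clean.copy()
H_attacked[[1, 4]] += 5.0
bad = vulnerable_nodes(H_clean, H_attacked, k=2)
keep = robust_subgraph(6, bad)
```

In the paper the representations would come from intermediate layers of a partially trained GCN evaluated on clean and attacked inputs, and the retained indices would induce the subgraph used for final training.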