Explanatory subgraph attacks against Graph Neural Networks

Huiwei Wang, Tianhua Liu, Ziyu Sheng, Huaqing Li

Neural Networks (2024)

Abstract
Graph Neural Networks (GNNs) are often viewed as black boxes due to their lack of transparency, which hinders their application in critical fields. Many explanation methods have been proposed to address the interpretability issue of GNNs. These explanation methods reveal explanatory information about graphs from different perspectives. However, the explanatory information may also pose an attack risk to GNN models. In this work, we explore this problem from the explanatory subgraph perspective. To this end, we utilize a powerful GNN explanation method called SubgraphX and deploy it locally to obtain explanatory subgraphs from given graphs. We then propose methods for conducting evasion attacks and backdoor attacks based on the local explainer. In evasion attacks, the attacker obtains explanatory subgraphs of test graphs from the local explainer and replaces them with an explanatory subgraph of another label, making the target model misclassify the test graphs into wrong labels. In backdoor attacks, the attacker employs the local explainer to select an explanatory trigger and locate suitable injection positions. We validate the effectiveness of our proposed attacks on state-of-the-art GNN models and different datasets. The results also demonstrate that our proposed backdoor attack is more efficient, adaptable, and concealed than previous backdoor attacks.
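The evasion attack described above can be sketched as a graph-editing step: cut out the node set that a local explainer (e.g. SubgraphX) marks as the explanatory subgraph of a test graph, then graft in an explanatory subgraph taken from a graph of a different label. The sketch below is a minimal illustration of that idea on plain adjacency dicts; the function and variable names are hypothetical and not the authors' implementation.

```python
# Hypothetical sketch of the explanatory-subgraph evasion attack:
# `expl_nodes` stands for the node set a local explainer (e.g. SubgraphX)
# would return for the victim graph; `donor_subgraph` is an explanatory
# subgraph extracted from a graph of a different label. All names are
# illustrative assumptions, not the paper's actual code.

def replace_explanatory_subgraph(graph, expl_nodes, donor_subgraph, anchor):
    """Return a new graph with `expl_nodes` removed and `donor_subgraph`
    (an adjacency dict over disjoint node ids) wired in at `anchor`."""
    # 1. Drop the victim graph's own explanatory nodes and their edges.
    attacked = {u: {v for v in nbrs if v not in expl_nodes}
                for u, nbrs in graph.items() if u not in expl_nodes}
    # 2. Graft the donor explanatory subgraph (node ids assumed disjoint).
    for u, nbrs in donor_subgraph.items():
        attacked.setdefault(u, set()).update(nbrs)
    # 3. Attach the donor subgraph to a surviving node so the result is connected.
    donor_entry = next(iter(donor_subgraph))
    attacked[anchor].add(donor_entry)
    attacked[donor_entry].add(anchor)
    return attacked

# Toy example: a 5-node test graph whose explainer marked {3, 4} as the
# explanatory subgraph; the donor subgraph {10, 11} comes from another label.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 4}, 3: {0, 4}, 4: {2, 3}}
donor = {10: {11}, 11: {10}}
attacked = replace_explanatory_subgraph(g, {3, 4}, donor, anchor=0)
```

In the real attack, the donor subgraph would be chosen so that the target GNN's prediction on the modified graph flips to the donor's label; this sketch only shows the structural substitution step.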
Keywords
Graph Neural Networks, Explainability, Adversarial attacks, Backdoor attacks