IDEA: Invariant Defense for Graph Adversarial Robustness
arXiv (2023)
Abstract
Despite the success of graph neural networks (GNNs), their vulnerability to
adversarial attacks poses tremendous challenges for practical applications.
Existing defense methods suffer from severe performance decline under unseen
attacks, due to either limited observed adversarial examples or pre-defined
heuristics. To address these limitations, we analyze the causalities in graph
adversarial attacks and conclude that causal features are the key to achieving
graph adversarial robustness, owing to their determining relationship with
labels and their invariance across attacks. To learn these causal features, we propose an
Invariant causal DEfense method against adversarial Attacks (IDEA). We derive
node-based and structure-based invariance objectives from an
information-theoretic perspective. IDEA ensures strong predictability for
labels and invariant predictability across attacks, making it provably a
causally invariant defense across various attacks. Extensive experiments
demonstrate that IDEA attains state-of-the-art defense performance under all
five attacks on all five datasets. The implementation of IDEA is available at
https://anonymous.4open.science/r/IDEA.