DPAR: Decoupled Graph Neural Networks with Node-Level Differential Privacy
WWW 2024 (2024)
Abstract
Graph Neural Networks (GNNs) have achieved great success in learning with
graph-structured data. However, privacy concerns have been raised because a trained
model can expose sensitive information of the graph, including both node features
and structure information. In this paper, we aim to achieve
node-level differential privacy (DP) for training GNNs so that a node and its
edges are protected. Node DP is inherently difficult for GNNs because all
direct and multi-hop neighbors participate in the calculation of gradients for
each node via layer-wise message passing, and there is no bound on how many
such neighbors a node can have. As a result, existing DP methods incur high
privacy cost or poor utility due to the high node sensitivity. We
propose a Decoupled GNN with Differentially Private
Approximate Personalized PageRank (DPAR) for training GNNs
with an enhanced privacy-utility tradeoff. The key idea is to decouple the
feature projection and message passing via a DP PageRank algorithm, which learns
the structure information and uses the top-K neighbors determined by the
PageRank scores for feature aggregation. By capturing the most important neighbors for
each node and avoiding the layer-wise message passing, it bounds the node
sensitivity and achieves improved privacy-utility tradeoff compared to
layer-wise perturbation based methods. We theoretically analyze the node DP
guarantee for the two processes combined, and empirically demonstrate that
DPAR achieves better utility than state-of-the-art methods under the same
level of node DP.
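The decoupling idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it computes (non-private) approximate personalized PageRank, keeps only each node's top-K neighbors to bound node sensitivity, and aggregates independently projected features in a single step instead of layer-wise message passing. DPAR would additionally add calibrated DP noise to the PageRank step; the function names here (`ppr_matrix`, `top_k_sparsify`, `decoupled_aggregate`) are illustrative only.

```python
import numpy as np

def ppr_matrix(adj, alpha=0.15, iters=50):
    """Dense personalized PageRank via power iteration.
    (No DP noise here; a DP variant would perturb this computation.)"""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    trans = adj / deg                       # row-stochastic transition matrix
    pi = np.eye(n)                          # one restart vector per node
    for _ in range(iters):
        pi = alpha * np.eye(n) + (1 - alpha) * pi @ trans
    return pi

def top_k_sparsify(pi, k):
    """Keep each node's k largest PPR scores and renormalize.
    Bounding the neighbors per node is what caps node sensitivity."""
    sparse = np.zeros_like(pi)
    for i in range(pi.shape[0]):
        idx = np.argsort(pi[i])[-k:]        # indices of the k largest scores
        sparse[i, idx] = pi[i, idx]
    return sparse / sparse.sum(axis=1, keepdims=True)

def decoupled_aggregate(features, weights_proj, pi_topk):
    """Project features first (a linear layer stands in for the model),
    then do one weighted aggregation -- no layer-wise message passing."""
    projected = features @ weights_proj
    return pi_topk @ projected

# Toy 4-node path graph: 0-1-2-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.random.default_rng(0).normal(size=(4, 3))
w = np.random.default_rng(1).normal(size=(3, 2))
pi_k = top_k_sparsify(ppr_matrix(adj), k=2)
out = decoupled_aggregate(feats, w, pi_k)
```

Because each node aggregates from at most K neighbors chosen once up front, a node's influence on other nodes' representations is bounded, which is the lever the paper uses to obtain a tighter node-DP accounting than layer-wise perturbation.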