On provable privacy vulnerabilities of graph representations
CoRR (2024)
Abstract
Graph representation learning (GRL) is critical for extracting insights from
complex network structures, but it also raises security concerns due to
potential privacy vulnerabilities in these representations. This paper
investigates the structural vulnerabilities in graph neural models where
sensitive topological information can be inferred through edge reconstruction
attacks. Our research primarily addresses the theoretical underpinnings of
cosine-similarity-based edge reconstruction attacks (COSERA), providing
theoretical and empirical evidence that such attacks can perfectly reconstruct
sparse Erdős–Rényi graphs with independent random features as graph size
increases. Conversely, we establish that sparsity is a critical factor for
COSERA's effectiveness, as demonstrated through analysis and experiments on
stochastic block models. Finally, we explore the resilience of (provably)
private graph representations produced via the noisy aggregation (NAG) mechanism
against COSERA. We empirically delineate instances wherein COSERA demonstrates
both efficacy and deficiency in its capacity to function as an instrument for
elucidating the trade-off between privacy and utility.
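To make the attack setting concrete, here is a minimal sketch of a cosine-similarity-based edge reconstruction attack in the spirit of COSERA: the adversary holds the learned node embeddings and an estimate of the edge count, scores every node pair by cosine similarity, and predicts the top-scoring pairs as edges. The function name, interface, and the assumption that the edge count is known are illustrative, not taken from the paper.

```python
import numpy as np

def cosera_reconstruct(embeddings, num_edges):
    """Predict the `num_edges` node pairs with highest cosine similarity as edges.

    embeddings: (n, d) array of node representations.
    Returns a list of (i, j) pairs with i < j.
    """
    # Normalize rows so the Gram matrix holds pairwise cosine similarities.
    Z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = Z @ Z.T
    # Consider each unordered pair once (upper triangle, excluding diagonal).
    rows, cols = np.triu_indices(S.shape[0], k=1)
    sims = S[rows, cols]
    # Keep the top-scoring pairs as the reconstructed edge set.
    top = np.argsort(sims)[::-1][:num_edges]
    return [(int(rows[i]), int(cols[i])) for i in top]
```

On embeddings where linked nodes are nearly parallel, the thresholded similarities recover the edge set exactly, which mirrors the regime the paper analyzes for sparse random graphs.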