Active Data Reconstruction Attacks in Vertical Federated Learning.

2023 IEEE International Conference on Big Data (BigData), 2023

Abstract
Vertical Federated Learning (VFL) stands out as a promising approach to safeguarding privacy in collaborative machine learning, allowing multiple entities to jointly train models on vertically partitioned datasets without revealing private information. While recent years have seen substantial research on privacy vulnerabilities and defense strategies for VFL, the focus has primarily been on passive scenarios where attackers adhere to the protocol. This perspective underestimates the practical threat, since attackers can deviate from the protocol to improve their inference capabilities. To address this gap, our study introduces two data reconstruction attacks designed to compromise data privacy in an active setting. Both attacks modify the gradients computed during the training phase of VFL to breach privacy. Our first attack, the Active Inversion Network, exploits a small portion of known data in the training set to coerce the passive participants into training an auto-encoder that reconstructs their private data. The second attack, the Active Generative Network, uses knowledge of the training data distribution to guide the system into training a conditional generative adversarial network (C-GAN) for feature inference. Our experiments confirm the efficacy of both attacks in inferring private features from real-world datasets.
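The sketch below illustrates the gradient-substitution idea summarized in the abstract: the active party returns gradients of a local reconstruction loss instead of the task loss, so the passive party's bottom model is unknowingly trained as the encoder of an auto-encoder whose decoder the attacker keeps. All module names, dimensions, and the single-passive-party setup are illustrative assumptions, not the paper's implementation.

```python
# Minimal, illustrative sketch (assumed setup, not the paper's code):
# an active VFL participant swaps the gradient it returns to the passive
# party for the gradient of a reconstruction loss, turning the passive
# party's bottom model into the encoder of an auto-encoder.
import torch
import torch.nn as nn

FEAT_DIM, EMB_DIM = 20, 8                             # hypothetical feature/embedding sizes

passive_bottom = nn.Linear(FEAT_DIM, EMB_DIM)         # held by the passive party
attacker_decoder = nn.Linear(EMB_DIM, FEAT_DIM)       # held by the active attacker
dec_opt = torch.optim.Adam(attacker_decoder.parameters(), lr=1e-3)
pas_opt = torch.optim.SGD(passive_bottom.parameters(), lr=1e-2)

# Samples whose raw features the attacker already knows
# (the "small portion of known data" mentioned in the abstract).
known_x = torch.randn(32, FEAT_DIM)

for step in range(200):
    # Passive party: forward pass, sends its embedding across the party boundary.
    emb = passive_bottom(known_x)
    emb_sent = emb.detach().requires_grad_(True)

    # Active attacker: computes a reconstruction loss instead of the task loss.
    recon = attacker_decoder(emb_sent)
    loss = nn.functional.mse_loss(recon, known_x)

    dec_opt.zero_grad()
    loss.backward()
    dec_opt.step()                                    # train the decoder locally

    # Malicious gradient returned to the passive party: in honest VFL this
    # would be d(task loss)/d(embedding); here it is d(recon loss)/d(embedding),
    # so passive_bottom is gradually shaped into an encoder.
    pas_opt.zero_grad()
    emb.backward(emb_sent.grad)
    pas_opt.step()
```

Once trained this way, the attacker can feed embeddings of unseen samples through its decoder to approximate the passive party's private features, which is the reconstruction capability the abstract describes.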
Keywords
Vertical Federated Learning, Data Privacy, Data Reconstruction Attacks