Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients

INFOCOM 2023

Abstract
Federated learning (FL) is a distributed machine learning technique that preserves data privacy. However, it has been shown to be vulnerable to gradient leakage attacks (GLA), which can reconstruct private training data from publicly shared gradients with overwhelming probability. Existing attacks either require modifying the FL model (analytics-based) or take a long time to converge (optimization-based), and they fail on the highly compressed gradients used in practical FL systems. In this paper, we pioneer a generation-based GLA method called FGLA that can reconstruct batches of user data while forgoing the optimization process entirely. Specifically, we design a feature separation technique that extracts the feature of each sample in a batch and then generates the user data directly. Extensive experiments on multiple image datasets demonstrate that FGLA can reconstruct user images in milliseconds, at a batch size of 256, from highly compressed gradients (compression ratios of 0.8% or higher), substantially outperforming state-of-the-art methods.
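The abstract does not give implementation details, so the following is only a minimal sketch of the classic feature-recovery observation that generation-based attacks of this kind build on: for a fully-connected layer y = Wx + b with a per-sample loss, dL/dW = (dL/db) x^T, so the input feature x can be read off directly from the weight and bias gradients. The `recover_features` helper below is a hypothetical illustration assuming a single uncompressed sample; FGLA's actual feature separation for batched, highly compressed gradients, and the generator that maps features back to images, are the paper's contributions and are not reproduced here.

```python
import torch
import torch.nn as nn

def recover_features(dW, db, eps=1e-8):
    """Recover input features from fully-connected-layer gradients.

    For y = W x + b, dL/dW = (dL/db) x^T, so every row of dW whose
    bias gradient is nonzero is a scaled copy of the input feature x.
    Dividing by db recovers x exactly in the single-sample case;
    batched/averaged gradients require the paper's feature-separation
    step, which is not shown here.
    """
    mask = db.abs() > eps
    return dW[mask] / db[mask].unsqueeze(1)

# Toy check: push one sample through an FC layer and recover its feature.
torch.manual_seed(0)
x = torch.randn(1, 512)                      # ground-truth input feature
fc = nn.Linear(512, 10)
loss = nn.functional.cross_entropy(fc(x), torch.tensor([3]))
loss.backward()
feats = recover_features(fc.weight.grad, fc.bias.grad)
print(torch.allclose(feats[0], x[0], atol=1e-4))  # True: feature recovered
```

In a generation-based attack, the recovered features would then be fed to a generator pretrained to invert the feature extractor, producing the reconstructed images in a single forward pass rather than through iterative optimization.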
Key words
Federated learning, data privacy, gradient leakage attack, image generation