Privacy-Preserving Multiview Matrix Factorization for Recommender Systems.

IEEE Transactions on Artificial Intelligence (2024)

Abstract
With an increasing focus on data privacy, there have been pilot studies on recommender systems in the federated learning (FL) framework, where multiple parties collaboratively train a model without sharing their data. Most of these studies assume that the conventional FL framework fully protects user privacy. However, our study shows that matrix factorization in federated recommender systems carries serious privacy risks. This article first provides a rigorous theoretical analysis of the server reconstruction attack in four federated recommender-system scenarios, followed by comprehensive experiments. The empirical results demonstrate that the FL server can infer users' information with accuracy $>80\%$ from the gradients uploaded by FL nodes. The robustness analysis suggests that our reconstruction attack outperforms random guessing by $>30\%$ under Laplace noise with $b\leq 0.5$ in all scenarios. The article then proposes a new privacy-preserving framework based on a threshold variant of homomorphic encryption, privacy-preserving multiview matrix factorization (PrivMVMF), to strengthen user data protection in federated recommender systems. PrivMVMF is implemented and tested thoroughly on the MovieLens dataset.
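The gradient leakage underlying such reconstruction attacks can be illustrated with a minimal sketch. In matrix factorization, a user's local gradient with respect to the shared item-factor matrix is nonzero only in the rows of items that user actually rated, so uploading it in plain text reveals the user's interaction set to the server. The setup below is hypothetical (random factors, a toy rating set) and only demonstrates this structural leak, not the paper's full attack, which additionally recovers rating values:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 4, 6

# u: the user's private latent factor; V: shared item-factor matrix from the server.
u = rng.normal(size=d)
V = rng.normal(size=(n_items, d))

# The user rated only items 1 and 4 -- this is the private information.
ratings = {1: 5.0, 4: 3.0}

# Local gradient of the squared-error loss w.r.t. V:
# grad_V[i] = (u . V[i] - r_i) * u, and zero for unrated items.
grad_V = np.zeros_like(V)
for i, r in ratings.items():
    err = u @ V[i] - r
    grad_V[i] = err * u

# The server receives grad_V and can read off which items were rated
# simply by checking which rows are nonzero.
inferred = {i for i in range(n_items) if np.any(grad_V[i] != 0)}
print(inferred)
```

Defenses such as adding Laplace noise to the gradient blur this signal, but as the robustness analysis above indicates, moderate noise levels still leave the attack well above chance; encrypting the uploads (as in PrivMVMF) removes the server's access to the gradients entirely.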
Key words
Data privacy, federated learning (FL), homomorphic encryption, recommender system