
Equipping Federated Graph Neural Networks with Structure-aware Group Fairness

23rd IEEE International Conference on Data Mining (ICDM 2023)

Stevens Institute of Technology

Abstract
Graph Neural Networks (GNNs) are used for graph data processing across various domains. Centralized training of GNNs often faces challenges due to privacy and regulatory issues, making federated learning (FL) a preferred distributed training paradigm. However, GNNs may inherit biases from training data, and in distributed scenarios these biases can propagate to the global model. To address this issue, we introduce $\mathrm{F}^{2}$GNN, a Fair Federated Graph Neural Network, to enhance group fairness. Recognizing that bias originates from both data and algorithms, $\mathrm{F}^{2}$GNN aims to mitigate both types of bias under federated settings. We offer theoretical insights into the relationship between data bias and statistical fairness metrics in GNNs. Building on our theoretical analysis, $\mathrm{F}^{2}$GNN features a fairness-aware local model update scheme and a fairness-weighted global model update scheme, considering both data bias and local model fairness during aggregation. Empirical evaluations show $\mathrm{F}^{2}$GNN outperforms state-of-the-art baselines in both fairness and accuracy.
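The fairness-weighted global update described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the paper's exact update rule: it measures each client's local-model group fairness with the statistical parity difference (SPD) and then aggregates client parameters with softmax weights that penalize less fair clients. The function names and the temperature parameter `beta` are assumptions for this example.

```python
import numpy as np

def statistical_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rate between two demographic groups
    (a standard group-fairness metric; smaller is fairer)."""
    rate0 = preds[groups == 0].mean()
    rate1 = preds[groups == 1].mean()
    return abs(rate0 - rate1)

def fairness_weighted_aggregate(client_params, client_spds, beta=5.0):
    """Aggregate client model parameters, down-weighting clients whose local
    models show a larger SPD.

    Hypothetical sketch of fairness-weighted aggregation: weights are a
    softmax over -beta * SPD, so fairer clients contribute more to the
    global model. `beta` controls how sharply unfairness is penalized.
    """
    spds = np.asarray(client_spds, dtype=float)
    logits = -beta * spds
    weights = np.exp(logits - logits.max())   # numerically stable softmax
    weights /= weights.sum()
    aggregated = sum(w * p for w, p in zip(weights, client_params))
    return aggregated, weights

# Example: the fairer client (SPD 0.1) dominates the aggregate.
params = [np.array([1.0]), np.array([3.0])]
agg, w = fairness_weighted_aggregate(params, client_spds=[0.1, 0.4])
```

In this toy run the fairer client receives roughly 0.82 of the aggregation weight, pulling the global parameter toward its local model. The actual $\mathrm{F}^{2}$GNN scheme also accounts for data bias during aggregation, which this sketch omits.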
Key words
Graph Neural Networks, Federated Learning, Group Fairness