FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently
IEEE Transactions on Knowledge and Data Engineering (2024)
Abstract
Fairness in Graph Convolutional Neural Networks (GCNs) becomes a more and
more important concern as GCNs are adopted in many crucial applications.
Societal biases against sensitive groups may exist in many real world graphs.
GCNs trained on those graphs are therefore susceptible to inheriting such
biases. In this paper, we adopt the well-known fairness notion of demographic
parity and tackle the challenge of training fair and accurate GCNs efficiently.
We present an in-depth analysis on how graph structure bias, node attribute
bias, and model parameters may affect the demographic parity of GCNs. Our
insights lead to FairSample, a framework that jointly mitigates the three types
of biases. We employ two intuitive strategies to rectify graph structures.
First, we inject edges across nodes that are in different sensitive groups but
similar in node features. Second, to enhance model fairness and retain model
quality, we develop a learnable neighbor sampling policy using reinforcement
learning. To address the bias in node features and model parameters, FairSample
is complemented by a regularization objective to optimize fairness.
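To make the demographic parity notion concrete, the following is a minimal sketch of how the demographic parity difference of a binary classifier can be measured: the absolute gap between the positive-prediction rates of two sensitive groups. The function and variable names are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """Return |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|.

    y_pred    : array-like of 0/1 predictions
    sensitive : array-like of 0/1 sensitive-group labels

    Names and interface are hypothetical, for illustration only.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    # Positive-prediction rate within each sensitive group.
    rate_group0 = y_pred[sensitive == 0].mean()
    rate_group1 = y_pred[sensitive == 1].mean()
    return abs(rate_group0 - rate_group1)

# Group 0 is predicted positive 2/3 of the time, group 1 only 1/3,
# so the demographic parity difference is 1/3.
dp = demographic_parity_diff([1, 1, 0, 1, 0, 0], [0, 0, 0, 1, 1, 1])
```

A regularization objective such as the one FairSample uses can penalize a differentiable surrogate of this gap during training, trading off accuracy against fairness.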
Keywords
Graph neural network, Sampling, Fairness