Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel


There has been a flurry of research in recent years on notions of fairness in ranking and recommender systems, particularly on how to evaluate whether a recommender allocates exposure equally across groups of relevant items (also known as provider fairness). While this research has laid an important foundation, it has given rise to different approaches depending on whether relevant items are compared per-user/per-query or aggregated across users. Despite both notions being established and intuitive, we discover that they can lead to opposite conclusions, a form of Simpson's Paradox. We reconcile these notions and show that the tension is due to differences in the distributions of users for whom items are relevant, and we break down the important factors of the users' recommendations. Based on this new understanding, practitioners might be interested in either notion, but may face challenges with the per-user metric due to the partial observability of relevance and user satisfaction that is typical of real-world recommenders. We describe a technique based on distribution matching to estimate the per-user metric in such a scenario, and demonstrate the effectiveness and usefulness of this approach on simulated and real-world recommender data.
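The paradox the abstract describes can be made concrete with a small numerical sketch. The numbers below are invented purely for illustration (they are not from the paper): exposure-per-relevant-item favors item group A for every individual user, yet favors group B once relevance and exposure are pooled across users, because A's relevant items are concentrated among users where overall exposure is low.

```python
# Illustrative sketch of Simpson's paradox in provider-fairness metrics.
# All numbers are hypothetical, chosen only to make the flip visible.
# Metric: exposure allocated per relevant item, for two item groups A and B.

# Per user, per group: (relevant_items, exposure)
users = [
    {"A": (10, 9),   "B": (100, 80)},  # user 1: A gets 0.90, B gets 0.80
    {"A": (100, 30), "B": (10, 2)},    # user 2: A gets 0.30, B gets 0.20
]

# Per-user comparison: group A's ratio is higher for every user.
for i, u in enumerate(users, 1):
    ratio = {g: exp / rel for g, (rel, exp) in u.items()}
    print(f"user {i}: A={ratio['A']:.2f}, B={ratio['B']:.2f}")

# Aggregated comparison: pool relevance and exposure across users first.
agg = {g: (sum(u[g][0] for u in users), sum(u[g][1] for u in users))
       for g in ("A", "B")}
agg_ratio = {g: exp / rel for g, (rel, exp) in agg.items()}
print(f"aggregated: A={agg_ratio['A']:.2f}, B={agg_ratio['B']:.2f}")
# Per user, A beats B; aggregated, B beats A. The conclusions flip because
# group A's relevant items sit mostly with the user who receives little
# exposure overall -- the distribution mismatch the paper identifies.
```

Here the per-user view concludes the recommender favors group A, while the aggregated view concludes it favors group B; neither computation is wrong, they simply weight users differently.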