An Experimental Study of Bias in Platform Worker Ratings: The Role of Performance Quality and Gender

CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 2020

Abstract
We study how the ratings people receive on online labor platforms are influenced by their performance, their gender, their rater's gender, and displayed ratings from other raters. We conducted a deception study in which participants collaborated on a task with a pair of simulated workers, who varied in gender and performance level, and then rated their performance. When the performance of paired workers was similar, low-performing females were rated lower than their male counterparts. Where there was a clear performance difference between paired workers, low-performing females were preferred over a similarly-performing male peer. Furthermore, displaying an average rating from other raters made ratings more extreme: high-performing workers received significantly higher ratings and low-performing workers lower ratings compared to when average ratings were absent. This work contributes an empirical understanding of when biases in ratings manifest, and offers recommendations for how online work platforms can counter these biases.
Keywords
Digital Ratings, Gender Discrimination, Social Mimicry, Bias in Ratings, Bias in Gig Platforms