A Comparative Study of Fairness in Medical Machine Learning.

ISBI 2023

Abstract
Although the applications of machine learning (ML) are revolutionizing medicine, current algorithms are not resilient against bias. Fairness in ML can be defined as measuring the potential bias in algorithms with respect to characteristics such as race, gender, and age. In this paper, we perform a comparative study to detect the bias caused by imbalanced group representation in medical datasets. We investigate bias in medical imaging tasks on the following datasets: a chest X-ray dataset (CXR lung segmentation) and the Stanford Diverse Dermatology Images (DDI) dataset (skin cancer prediction). Our results show differences in the performance of state-of-the-art models across different groups. To mitigate this performance disparity, we explore several bias mitigation approaches and demonstrate that integrating these approaches into ML models can improve fairness without degrading overall performance.
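The group-level bias detection described above can be illustrated with a minimal sketch: compute a performance metric per demographic group and report the largest disparity between groups. This is an illustrative example only, not the paper's actual evaluation code; the function name, the choice of accuracy as the metric, and the toy inputs are all assumptions.

```python
import numpy as np

def group_performance_gap(y_true, y_pred, groups):
    """Per-group accuracy and the largest accuracy gap between any two groups.

    Accuracy is used here purely as a stand-in metric; the same pattern applies
    to Dice scores (segmentation) or ROC-AUC (classification).
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        # Accuracy restricted to samples belonging to group g
        per_group[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Toy example (hypothetical data): two skin-tone groups with unequal accuracy
accs, gap = group_performance_gap(
    y_true=[1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 0, 0, 1, 0],
    groups=["light", "light", "light", "dark", "dark", "dark"],
)
print(accs, gap)
```

A large gap under such a metric is the kind of performance disparity the study measures before and after applying bias mitigation.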
Keywords
Medical machine learning, responsible machine learning, fairness, medical image analysis