Hands-On Tutorial: Probing ML Models for Fairness with the What-If Tool and SHAP

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (2020)

Abstract
As more and more industries use machine learning, it's important to understand how these models make predictions, and where bias can be introduced in the process. In this tutorial we'll walk through two open-source frameworks for analyzing your models from a fairness perspective. We'll start with the What-If Tool, a visualization tool that you can run inside a Python notebook to analyze an ML model. With the What-If Tool, you can identify dataset imbalances, see how individual features impact your model's predictions through partial dependence plots, and analyze human-centered ML models from a fairness perspective using various optimization strategies.

Then we'll look at SHAP, a tool for interpreting the output of any machine learning model and seeing how a model arrived at its predictions for individual datapoints. We will then show how to use SHAP and the What-If Tool together. After the tutorial you'll have the skills to get started with both of these tools on your own datasets, and you'll be better equipped to analyze your models from a fairness perspective.
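To make the notebook workflow concrete, here is a minimal sketch of launching the What-If Tool from a Python notebook. It is illustrative rather than the tutorial's own code: it assumes the witwidget package is installed, and the toy dataset, the feature names f0/f1/label, and the predict_fn wrapper are hypothetical stand-ins.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

    # Toy data standing in for a real dataset: two features, binary label.
    rng = np.random.RandomState(0)
    X = rng.rand(200, 2)
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
    model = LogisticRegression().fit(X, y)

    # WIT accepts plain lists of feature values plus matching feature names;
    # the label is included as the last column so WIT can slice on it.
    examples = np.column_stack([X, y]).tolist()

    def predict_fn(inputs):
        # Hypothetical wrapper: WIT passes examples including the label
        # column, so drop it before scoring.
        return model.predict_proba(np.array(inputs)[:, :2])

    config_builder = (WitConfigBuilder(examples, ["f0", "f1", "label"])
                      .set_custom_predict_fn(predict_fn)
                      .set_label_vocab(["negative", "positive"]))
    WitWidget(config_builder, height=600)  # renders the interactive widget

From the rendered widget you can then slice the dataset by a feature, inspect partial dependence plots, and compare fairness optimization strategies interactively.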
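SHAP follows a similar notebook pattern. The sketch below, again an illustration over a made-up dataset rather than the tutorial's code, fits a small tree ensemble and uses shap.TreeExplainer to attribute individual predictions to input features, both for a single datapoint and in aggregate.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Toy regression data: two features, continuous target.
    rng = np.random.RandomState(0)
    X = rng.rand(200, 2)
    y = X[:, 0] + 2.0 * X[:, 1]
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Local view: how much each feature pushed datapoint 0's prediction
    # away from the dataset's expected value.
    print("base value:", explainer.expected_value)
    print("attributions for datapoint 0:", shap_values[0])

    # Global view: distribution of attributions across the whole dataset.
    shap.summary_plot(shap_values, X, feature_names=["f0", "f1"])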
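The two tools can also be wired together: the What-If Tool lets a custom predict function return attributions alongside predictions, which the widget then displays per datapoint. The sketch below reuses model, X, and examples from the first snippet and the shap import from the second; the predictions/attributions return format follows WIT's attribution support, so check that your installed witwidget version provides it.

    # LinearExplainer suits the logistic-regression model from the first
    # sketch; it returns one attribution per feature per datapoint.
    explainer = shap.LinearExplainer(model, X)

    def predict_with_shap(inputs):
        features = np.array(inputs)[:, :2]  # drop the label column
        preds = model.predict_proba(features)
        shap_vals = explainer.shap_values(features)
        # WIT displays per-feature attribution values when the predict
        # function returns them alongside the predictions.
        attributions = [{"f0": row[0], "f1": row[1]} for row in shap_vals]
        return {"predictions": preds, "attributions": attributions}

    config_builder = (WitConfigBuilder(examples, ["f0", "f1", "label"])
                      .set_custom_predict_fn(predict_with_shap))
    WitWidget(config_builder, height=600)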
Keywords
Machine learning, Fairness, Data visualization