Algorithmic Decision Making in the Presence of Unmeasured Confounding

arXiv: Methodology (2018)

Abstract
On a variety of complex decision-making tasks, from doctors prescribing treatment to judges setting bail, machine learning algorithms have been shown to outperform expert human judgments. One complication, however, is that it is often difficult to anticipate the effects of algorithmic policies prior to deployment, making the decision to adopt them risky. In particular, one generally cannot use historical data to directly observe what would have happened had the actions recommended by the algorithm been taken. One standard strategy is to model potential outcomes for alternative decisions assuming that there are no unmeasured confounders (i.e., to assume ignorability). But if this ignorability assumption is violated, the predicted and actual effects of an algorithmic policy can diverge sharply. In this paper we present a flexible, Bayesian approach to gauge the sensitivity of predicted policy outcomes to unmeasured confounders. We show that this policy evaluation problem is a generalization of estimating heterogeneous treatment effects in observational studies, and so our methods can immediately be applied to that setting. Finally, we show, both theoretically and empirically, that under certain conditions it is possible to construct near-optimal algorithmic policies even when ignorability is violated. We demonstrate the efficacy of our methods on a large dataset of judicial actions, in which one must decide whether defendants awaiting trial should be required to pay bail or can be released without payment.
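The abstract's central point — that outcome estimates derived under ignorability can diverge sharply from actual effects, and that one can gauge this divergence by positing unmeasured confounders — can be illustrated with a minimal sketch. The Python snippet below is not the paper's Bayesian procedure: the confounder prevalences q1, q0 and the additive outcome effect gamma are invented sensitivity parameters, and the grid scan is a stand-in for the flexible Bayesian treatment the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic observational data with an UNMEASURED binary confounder u
# that raises both the treatment probability and the outcome probability.
u = rng.binomial(1, 0.5, n)
a = rng.binomial(1, 0.3 + 0.4 * u)             # treatment depends on u
y = rng.binomial(1, 0.2 + 0.1 * a + 0.3 * u)   # outcome depends on a and u

# Naive estimate under ignorability: difference in observed arm means.
# The true average treatment effect is 0.10 by construction, but the
# naive estimate is biased upward because treated units skew toward u=1.
naive_ate = y[a == 1].mean() - y[a == 0].mean()
print(f"naive ATE: {naive_ate:.3f}   (true ATE: 0.100)")

# Sensitivity analysis: posit a binary confounder with prevalence q1 in
# the treated arm and q0 in the control arm, plus an additive outcome
# effect gamma, and subtract its assumed contribution from each arm.
def adjusted_ate(q1, q0, gamma):
    mu1 = y[a == 1].mean() - gamma * q1
    mu0 = y[a == 0].mean() - gamma * q0
    return mu1 - mu0

# Scanning gamma shows how far the effect estimate can move; at the
# data-generating values (q1=0.7, q0=0.3, gamma=0.3) it recovers ~0.10.
for gamma in (0.0, 0.15, 0.30):
    print(f"gamma={gamma:.2f} -> adjusted ATE: "
          f"{adjusted_ate(q1=0.7, q0=0.3, gamma=gamma):.3f}")
```

In this toy version the grid over gamma plays the role of the sensitivity parameters; per the abstract, the paper's Bayesian approach would instead place uncertainty over such parameters and report the induced range of predicted policy outcomes rather than fixing them at single values.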