Recourse under Model Multiplicity via Argumentative Ensembling (Technical Report)
CoRR (2023)
Abstract
Model Multiplicity (MM) arises when multiple, equally performing machine
learning models can be trained to solve the same prediction task. Recent
studies show that models obtained under MM may produce inconsistent predictions
for the same input. When this occurs, it becomes challenging to provide
counterfactual explanations (CEs), a common means for offering recourse
recommendations to individuals negatively affected by models' predictions. In
this paper, we formalise this problem, which we name recourse-aware ensembling,
and identify several desirable properties which methods for solving it should
satisfy. We show that existing ensembling methods, naturally extended in
different ways to provide CEs, fail to satisfy these properties. We then
introduce argumentative ensembling, deploying computational argumentation to
guarantee robustness of CEs to MM, while also accommodating customisable user
preferences. We show theoretically and experimentally that argumentative
ensembling satisfies properties which the existing methods lack, and that the
trade-offs are minimal with respect to accuracy.
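To make the core problem concrete, here is a minimal illustrative sketch (not the paper's argumentative-ensembling algorithm): two hypothetical, equally simple "models" that can disagree on the same input, and a check that a candidate counterfactual explanation (CE) flips the prediction under *every* model. All names and decision rules below are invented for illustration.

```python
# Hypothetical loan-approval models: both reject the applicant below,
# yet they weight the features differently, so a CE that satisfies one
# model need not satisfy the other -- the model multiplicity problem.

def model_a(x):
    # invented rule: approve if income + 2*savings > 10
    return 1 if x["income"] + 2 * x["savings"] > 10 else 0

def model_b(x):
    # invented rule: approve if 2*income + savings > 10
    return 1 if 2 * x["income"] + x["savings"] > 10 else 0

models = [model_a, model_b]

applicant = {"income": 3, "savings": 2}   # rejected by both models
candidate_ce = {"income": 3, "savings": 4}  # proposed recourse: save more

def ce_valid_for_all(ce, models):
    """A CE robust to model multiplicity must yield the desired
    (positive) prediction under every plausible model, not just one."""
    return all(m(ce) == 1 for m in models)

# The candidate CE flips model_a's prediction but not model_b's,
# so it is not a robust recourse recommendation.
print(model_a(candidate_ce), model_b(candidate_ce))
print(ce_valid_for_all(candidate_ce, models))
```

The sketch only illustrates why naive per-model CEs fail under MM; the paper's contribution is an argumentation-based ensembling method that guarantees such robustness while accommodating user preferences.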