Explainability through uncertainty: Trustworthy decision-making with neural networks
European Journal of Operational Research (2024)
Abstract
Uncertainty is a key feature of any machine learning model and is
particularly important in neural networks, which tend to be overconfident. This
overconfidence is worrying under distribution shifts, where the model
performance silently degrades as the data distribution diverges from the
training data distribution. Uncertainty estimation offers a solution to
overconfident models, communicating when the output should (not) be trusted.
Although methods for uncertainty estimation have been developed, they have not
been explicitly linked to the field of explainable artificial intelligence
(XAI). Furthermore, literature in operations research ignores the actionability
component of uncertainty estimation and does not consider distribution shifts.
This work proposes a general uncertainty framework with three
contributions: (i) uncertainty estimation in ML models is positioned as an XAI
technique, giving local and model-specific explanations; (ii) classification
with rejection is used to reduce misclassifications by bringing a human expert
in the loop for uncertain observations; (iii) the framework is applied to a
case study on neural networks in educational data mining subject to
distribution shifts. Uncertainty as XAI improves the model's trustworthiness in
downstream decision-making tasks, giving rise to more actionable and robust
machine learning systems in operations research.
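Contribution (ii), classification with rejection, can be illustrated with a minimal sketch: predictions whose confidence falls below a threshold are withheld and routed to a human expert. The function name `predict_with_rejection` and the threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.8):
    """Classify with rejection: defer to a human expert when the
    model's top-class probability falls below `threshold`.
    The threshold value is an illustrative assumption."""
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)       # top-class probability per observation
    predictions = probs.argmax(axis=1)   # most likely class per observation
    # Mark uncertain observations with -1 so they can be sent to an expert.
    rejected = confidence < threshold
    predictions = np.where(rejected, -1, predictions)
    return predictions, rejected

# Example: three observations over three classes.
probs = [[0.90, 0.05, 0.05],   # confident -> keep prediction
         [0.40, 0.35, 0.25],   # uncertain -> reject to expert
         [0.10, 0.85, 0.05]]   # confident -> keep prediction
preds, rejected = predict_with_rejection(probs)
print(preds)     # [ 0 -1  1]
print(rejected)  # [False  True False]
```

In practice the confidence score would come from an uncertainty estimate (e.g. predictive entropy or an ensemble's disagreement) rather than raw softmax probabilities, which the paper notes tend to be overconfident under distribution shift.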