Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications
arXiv (2024)
Abstract
Given the sheer volume of surgical procedures and the significant rate of
postoperative fatalities, assessing and managing surgical complications has
become a critical public health concern. Existing artificial intelligence (AI)
tools for risk surveillance and diagnosis often lack adequate interpretability,
fairness, and reproducibility. To address this, we proposed an Explainable AI
(XAI) framework designed to answer five critical questions: why, why not, how,
what if, and what else, with the goal of enhancing the explainability and
transparency of AI models. We incorporated various techniques such as Local
Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations
(SHAP), counterfactual explanations, model cards, an interactive feature
manipulation interface, and the identification of similar patients to address
these questions. We showcased an XAI interface prototype that adheres to this
framework for predicting major postoperative complications. This initial
implementation has provided valuable insights into the vast explanatory
potential of our XAI framework and represents an initial step towards its
clinical adoption.
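To illustrate the "why" question that the framework answers with LIME-style local explanations, the sketch below fits a local weighted linear surrogate around one instance of a black-box classifier. This is a minimal illustration on synthetic data under assumed settings (the model, feature count, kernel width, and function name `explain_locally` are all hypothetical), not the authors' implementation or the official LIME library:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for tabular patient data:
# features 0 and 1 drive the outcome, features 2 and 3 are noise.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Black-box risk model standing in for the complication predictor.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, scale=0.5):
    """LIME-style local explanation: perturb around instance x,
    query the black box, and fit a proximity-weighted linear
    surrogate whose coefficients serve as feature attributions."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    probs = model.predict_proba(Z)[:, 1]
    # Weight perturbed samples by closeness to x (RBF kernel).
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_

attributions = explain_locally(model, X[0])
print(attributions)  # the informative features should dominate
```

The same perturb-query-fit loop underlies the SHAP and counterfactual components of such an interface; only the weighting scheme and the objective differ.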