Improving Interpretable Models based on Knowledge Distillation for ICU Mortality Prediction using Electronic Health Records

Jaehoon Byun, Minkyu Kim, Jinho Kim

ICMHI (2023)

Abstract
Recently, deep learning has shown strong performance in fields such as computer vision, natural language processing, and healthcare. However, deep learning-based models operate as black boxes, with the disadvantage that it is difficult for humans to understand the reasons behind their outputs. Interpretability is especially important in the medical field, where decisions must be made on strong evidence. Many studies on mortality prediction, a representative healthcare application that requires interpretability, have relied on deep learning-based models. These models achieve good predictive performance but have difficulty explaining their predictions, whereas linear machine-learning models are highly interpretable but less accurate. In this paper, we improve the performance of linear models using knowledge distillation. We show the effects of first-order and second-order input features on mortality prediction with logistic regression and a factorization machine, respectively. In addition, we demonstrate the interpretability of the models by visualizing the weights of the trained models. We expect that providing interpretability of linear models through visualization can help humans intuitively understand the reasons behind a model's predictions and, furthermore, improve the quality of healthcare provided to individual patients in hospitals.
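The core idea of the abstract — training an interpretable logistic-regression student on a deep teacher's soft predictions instead of hard labels — can be sketched as follows. The paper gives no implementation details, so this is a minimal illustration on synthetic data: the feature matrix, the stand-in teacher, and all hyperparameters are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EHR features (the paper uses ICU records).
X = rng.normal(size=(500, 10))
true_w = rng.normal(size=10)
y = (X @ true_w + 0.5 * rng.normal(size=500) > 0).astype(float)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# "Teacher" soft probabilities: here a simple noiseless scorer standing in
# for a trained deep model's predicted mortality probabilities.
teacher_probs = sigmoid(2.0 * (X @ true_w))


def train_logreg(X, targets, lr=0.1, epochs=500):
    """Gradient descent on cross-entropy; targets may be soft (distillation)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - targets) / len(targets)
    return w


# Student trained on hard labels vs. distilled from the teacher's soft labels.
w_hard = train_logreg(X, y)
w_soft = train_logreg(X, teacher_probs)


def accuracy(w):
    return float(((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean())
```

The distilled weight vector `w_soft` remains a single coefficient per feature, so it can be visualized directly (e.g. as a bar chart of feature weights), which is the interpretability the abstract refers to; the same soft-target scheme extends to a factorization machine for second-order feature interactions.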