Ensembling to Leverage the Interpretability of Medical Image Analysis Systems.

IEEE Access (2023)

Abstract
Along with the increase in the accuracy of artificial intelligence systems, their complexity has also risen. Despite high accuracy, high-risk decision-making requires explanations of a model's decisions, which often take the form of saliency maps. This work examines the efficacy of ensembling deep convolutional neural networks to leverage explanations, under the premise that ensemble models are combinatorially informed. A novel approach is presented for aggregating saliency maps derived from multiple base models, as an alternative way of combining the different perspectives that several competent models offer. The proposed methodology lowers computation costs while allowing for the combination of maps of various origins. Following a saliency map evaluation scheme, four tests are performed over three image datasets: two medical and one generic. The results suggest that interpretability is improved by combining information through the aggregation scheme. The discussion that follows provides insights into the inner workings behind the results, such as the specific pairing of interpretability and ensemble methods, and offers useful suggestions for future work.
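The abstract describes aggregating saliency maps from multiple base models into a single ensemble explanation. A minimal sketch of one such aggregation is shown below, assuming a weighted average of per-model maps after per-map min-max normalization; the paper's exact scheme, and the function and parameter names here, are assumptions for illustration only.

```python
import numpy as np

def aggregate_saliency_maps(maps, weights=None):
    """Combine per-model saliency maps into one ensemble map.

    maps    : list of 2-D arrays, one saliency map per base model
    weights : optional per-model weights (e.g. validation accuracy);
              defaults to a uniform average
    """
    maps = np.stack([np.asarray(m, dtype=float) for m in maps])
    # Normalize each map to [0, 1] so maps of different origins
    # (e.g. DeepLIFT vs. gradient-based attributions) are comparable.
    mins = maps.min(axis=(1, 2), keepdims=True)
    maxs = maps.max(axis=(1, 2), keepdims=True)
    maps = (maps - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    if weights is None:
        weights = np.ones(len(maps))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Weighted sum over the model axis yields a single 2-D map.
    return np.tensordot(weights, maps, axes=1)

# Toy example: three tiny "saliency maps" from three base models
m1 = [[0.0, 1.0], [0.5, 0.5]]
m2 = [[0.2, 0.8], [0.4, 0.6]]
m3 = [[0.1, 0.9], [0.3, 0.7]]
combined = aggregate_saliency_maps([m1, m2, m3])
print(combined.shape)  # (2, 2)
```

Because aggregation operates on the maps rather than on the models themselves, combining explanations in this way is cheap relative to retraining or jointly evaluating the ensemble, which is consistent with the lowered computation cost the abstract mentions.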
Keywords
Medical images, computer vision, interpretability, explainability, DeepLIFT, ensemble models