Exploring the Explainability of Machine Learning Algorithms for Prostate Cancer

2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Abstract
Convolutional Neural Networks (CNNs) have become revolutionary for identification of clinically significant (CS) prostate cancer (PCA) on prostate MRI. However, these algorithms suffer from an explainability problem, and few have been implemented in clinical practice. We investigated explainability and various normalization techniques for PCA by optimizing the computational attributes that correspond to those radiologists typically consider, including contrast and homogeneity. An open-source ProstateX dataset consisting of 330 T2-weighted prostate MRIs (254 non-clinically significant, 76 clinically significant cancerous) and a corresponding previously trained high-accuracy CNN (a ResNet) were used to evaluate causes of high performance. Data were preprocessed to isolate differences in contrast and homogeneity through: (1) varying the contrast to 50%, 75%, 100%, 150%, and 175% of the original image by varying alpha in the OpenCV module in Python, (2) thresholding the image (pixels below 45%, 50%, or 55% of the average intensity were set to 0), and (3) applying Canny edge detection. The ResNet model was retrained on each preprocessed dataset with the same parameters as the initial model, and performance was evaluated via 5-fold cross validation. Baseline model performance on the MRI data matched that of the original winners of the ProstateX challenge (AUC 0.84). Thresholding the image at 50% (AUC 0.87), increasing contrast to 1.5x the baseline (AUC 0.83), and combining increased contrast with Canny edge detection (AUC 0.83) matched baseline performance, indicating these features were important to the original model. Decreasing the contrast to 0.75x or 0.50x universally decreased model performance.
Applying three popular normalization techniques to isolate features of interest, and thereby probe explainability, in a machine learning model for prostate cancer showed that no normalized dataset outperformed the baseline model. However, several perturbations resulted in statistically significantly decreased performance. These results indicate that if contrast is reduced from baseline, or if Canny edge detection is used without increased whole-image contrast, model performance suffers. Future research into normalization techniques is recommended to continue to improve prostate cancer model performance.
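The evaluation protocol, stratified 5-fold cross validation scored by AUC, can be sketched as below. This is a schematic stand-in, assuming a simple logistic-regression classifier on synthetic features in place of the ResNet on MRI volumes; only the class counts (254 non-CS, 76 CS) come from the abstract.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(330, 16))                       # stand-in for per-lesion features
y = np.r_[np.zeros(254), np.ones(76)].astype(int)    # 254 non-CS, 76 CS, as in ProstateX
X[y == 1] += 0.8                                     # inject signal so AUC is informative

aucs = []
for train_idx, test_idx in StratifiedKFold(
        n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    probs = clf.predict_proba(X[test_idx])[:, 1]     # probability of the CS class
    aucs.append(roc_auc_score(y[test_idx], probs))

mean_auc = float(np.mean(aucs))
```

Stratification matters here because the classes are imbalanced (roughly 3:1); it keeps the CS fraction comparable across folds so per-fold AUCs are stable.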
Keywords
Cancer, Machine Learning, Radiology, Tumor Prediction, Medical Imaging