Explainable domain transfer of distant supervised cancer subtyping model via imaging-based rules extraction.

Artificial Intelligence in Medicine (2023)

Abstract
Image texture analysis has for decades represented a promising opportunity for cancer assessment and disease progression evaluation, evolving into a discipline of its own, i.e., radiomics. However, the road to a complete translation into clinical practice is still hampered by intrinsic limitations. As purely supervised classification models fail to devise robust imaging-based biomarkers for prognosis, cancer subtyping approaches would benefit from the employment of distant supervision, for instance exploiting survival/recurrence information. In this work, we assessed, tested, and validated the domain-generality of our previously proposed Distant Supervised Cancer Subtyping model on Hodgkin Lymphoma. We evaluated the model performance on two independent datasets coming from two hospitals, comparing and analyzing the results. Although successful and consistent, the comparison confirmed the instability of radiomics due to a lack of across-center reproducibility, leading to explainable results in one center and poor interpretability in the other. We thus propose a Random Forest-based Explainable Transfer Model for testing the domain-invariance of imaging biomarkers extracted from retrospective cancer subtyping. In doing so, we tested the predictive ability of cancer subtyping in a validation and prospective setting, which led to successful results and supported the domain-generality of the proposed approach. Moreover, the extraction of decision rules enables the identification of risk factors and robust biomarkers to inform clinical decisions. This work shows the potential of the Distant Supervised Cancer Subtyping model to be further evaluated in larger multi-center datasets, to reliably translate radiomics into medical practice. The code is available at this GitHub repository.
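To illustrate the flavor of the rule-extraction step described above, the following is a minimal, hypothetical sketch (not the authors' implementation): a Random Forest is fit on radiomics features paired with distant-supervised subtype labels, and human-readable decision rules are read out of a tree. The feature names, data, and labels are synthetic placeholders for illustration only.

```python
# Hedged sketch: Random Forest on radiomics-style features with rule readout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(0)

# Hypothetical radiomics feature matrix: 200 patients x 4 texture/shape features.
feature_names = ["glcm_contrast", "glrlm_run_length", "firstorder_entropy", "shape_sphericity"]
X = rng.normal(size=(200, len(feature_names)))
# Hypothetical subtype labels, standing in for distant supervision (e.g., survival-derived groups).
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

# Shallow trees keep the extracted rules short and interpretable.
forest = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0)
forest.fit(X, y)

# Print the decision rules of one tree as a proxy for candidate imaging biomarkers.
print(export_text(forest.estimators_[0], feature_names=feature_names))
```

In such a setup, rules that recur across trees and across centers would be the natural candidates for domain-invariant biomarkers, which is the role the paper assigns to its Explainable Transfer Model.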