Patch-based deep multi-modal learning framework for Alzheimer’s disease diagnosis using multi-view neuroimaging

Biomedical Signal Processing and Control (2023)

Abstract
Computer-aided diagnosis contributes to the early detection of mild cognitive impairment (MCI). Although many deep learning methods have achieved favorable performance on Alzheimer’s disease (AD) diagnosis tasks, single-modality methods give a one-sided view of disease features. Moreover, existing patch-based methods usually ignore the spatial relationships between local image patches when modeling the global feature representation of the brain. In addition, they rely on anatomical landmark detection algorithms to pre-determine informative locations in the brain, so localizing brain atrophy not only requires extensive expert experience but may also miss potential lesion areas. In this paper, we propose a patch-based deep multi-modal learning (PDMML) framework for brain disease diagnosis. Specifically, we design a discriminative location discovery strategy that filters out normal regions without prior knowledge. Multi-modal imaging features are integrated at the patch level to capture multi-view representations of brain disease. The local patches are further learned jointly to prevent the loss of spatial information caused by directly flattening the patches. Experimental results on 842 subjects from the ADNI dataset demonstrate that the proposed method excels in both discriminative location discovery and brain disease diagnosis.
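To make the patch-level fusion and joint patch learning described in the abstract concrete, below is a minimal PyTorch sketch. The two modalities (MRI and PET), the patch count, encoder layers, and the use of a transformer encoder for joint inter-patch modeling are illustrative assumptions, not the authors' exact PDMML architecture.

```python
# A minimal sketch of patch-level multi-modal fusion followed by joint patch
# learning. All module names, patch counts, and layer sizes are assumptions
# for illustration; the paper's actual PDMML design may differ.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Encodes one 3D image patch into a fixed-length feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                         # x: (B, 1, D, H, W)
        return self.fc(self.conv(x).flatten(1))   # (B, feat_dim)

class PatchMultiModalNet(nn.Module):
    """Fuses MRI and PET features per patch, then learns across patches jointly."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.mri_enc = PatchEncoder(feat_dim)
        self.pet_enc = PatchEncoder(feat_dim)
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)   # patch-level fusion
        # Joint learning over the patch sequence (instead of flattening the
        # patches) so relations between patches are modeled explicitly.
        self.joint = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, mri_patches, pet_patches):
        # mri_patches / pet_patches: (B, P, 1, D, H, W), P patches per subject
        B, P = mri_patches.shape[:2]
        m = self.mri_enc(mri_patches.flatten(0, 1)).view(B, P, -1)
        p = self.pet_enc(pet_patches.flatten(0, 1)).view(B, P, -1)
        fused = torch.relu(self.fuse(torch.cat([m, p], dim=-1)))  # (B, P, feat_dim)
        joint = self.joint(fused)                                  # inter-patch modeling
        return self.cls(joint.mean(dim=1))                         # subject-level prediction
```

As a usage sketch, `PatchMultiModalNet()(mri, pet)` with `mri` and `pet` of shape `(batch, patches, 1, 25, 25, 25)` returns class logits per subject; the discriminative location discovery step that selects which patches to feed in is not shown here.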
Keywords
Alzheimer’s disease, Neuroimages, Discriminative lesion localization, Convolutional neural network, Multi-modal fusion