Disease-Image Specific Generative Adversarial Network for Brain Disease Diagnosis with Incomplete Multi-modal Neuroimages

Lecture Notes in Computer Science (2019)

Cited by 33 | Views 40
Abstract
The incomplete-data problem is unavoidable in automated brain disease diagnosis using multi-modal neuroimages (e.g., MRI and PET). To utilize all available subjects to train diagnostic models, deep networks have been proposed to directly impute missing neuroimages by treating all voxels in a 3D volume equally. These methods are not diagnosis-oriented, as they ignore the disease-image-specific information conveyed in multi-modal neuroimages, i.e., (1) disease may cause abnormalities only in local brain regions, and (2) different modalities may highlight different disease-associated regions. In this paper, we propose a unified disease-image-specific deep learning framework for joint image synthesis and disease diagnosis using incomplete multi-modal neuroimaging data. Specifically, taking whole-brain images as input, we design a disease-image-specific neural network (DSNN) to implicitly model disease-image specificity in MRI/PET scans using a spatial cosine kernel. Moreover, we develop a feature-consistent generative adversarial network (FGAN) to synthesize missing images, encouraging the DSNN feature maps of synthetic images and their respective real images to be consistent. Our DSNN and FGAN can be trained jointly, so that missing images are imputed in a task-oriented manner for brain disease diagnosis. Experimental results on 1,466 subjects suggest that our method not only generates reasonable neuroimages, but also achieves state-of-the-art performance in both Alzheimer's disease (AD) identification and mild cognitive impairment (MCI) conversion prediction.
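The two ingredients the abstract names can be sketched in a few lines. Below is a minimal, illustrative NumPy sketch (not the paper's implementation; function names, the MSE form of the consistency term, and the single-template cosine kernel are all assumptions): the FGAN feature-consistency term penalizes discrepancies between DSNN feature maps of a real image and its synthesized counterpart, and the spatial cosine kernel scores each spatial location of a feature map against a disease-associated template vector.

```python
import numpy as np

def feature_consistency_loss(feat_real, feat_synth):
    """Hedged sketch of the FGAN feature-consistency term: mean squared
    difference between DSNN feature maps of a real image and its
    synthesized counterpart (the paper's exact loss may differ)."""
    return float(np.mean((feat_real - feat_synth) ** 2))

def spatial_cosine_kernel(feat, template):
    """Illustrative spatial cosine kernel: cosine similarity between the
    feature vector at each spatial location and a template vector,
    producing a spatial map that emphasizes disease-associated regions.

    feat: (C, H, W) feature map; template: (C,) template vector.
    Returns an (H, W) similarity map in [-1, 1]."""
    num = np.tensordot(template, feat, axes=([0], [0]))            # (H, W)
    denom = np.linalg.norm(template) * np.linalg.norm(feat, axis=0) + 1e-8
    return num / denom
```

In this reading, the cosine map acts as a soft spatial weighting inside the diagnostic network, while the consistency term ties the generator's output to features that matter for diagnosis rather than to raw voxel intensities alone.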