Multimodal Fusion of Brain Imaging Data: Methods and Applications

Machine Intelligence Research (2024)

Abstract
Neuroimaging data typically include multiple modalities, such as structural or functional magnetic resonance imaging, diffusion tensor imaging, and positron emission tomography, which provide complementary views for observing and analyzing the brain. To leverage these complementary representations, multimodal fusion is needed to extract both inter-modality and intra-modality information. With this rich information, it has become increasingly popular to combine data from multiple modalities to explore the structural and functional characteristics of the brain in both health and disease. In this paper, we first review a wide spectrum of advanced machine learning methodologies for fusing multimodal brain imaging data, broadly categorized into unsupervised and supervised learning strategies. We then discuss representative applications, including how these methods help to understand brain arealization, improve the prediction of behavioral phenotypes and brain aging, and accelerate the exploration of biomarkers for brain diseases. Finally, we discuss emerging trends and important future directions. Collectively, we intend to offer a comprehensive overview of brain imaging fusion methods and their successful applications, along with the challenges imposed by multi-scale and big data, which raise an urgent demand for new models and platforms.
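To make the unsupervised fusion category concrete, the following is a minimal sketch of one widely used approach, canonical correlation analysis (CCA), which projects subject-by-feature matrices from two modalities onto maximally correlated components. This is an illustrative example rather than the paper's specific pipeline; the feature matrices, dimensions, and variable names are hypothetical.

```python
# Illustrative CCA-based fusion of two modality feature matrices
# (e.g., structural MRI vs. functional MRI features per subject).
# Data here are random placeholders, not real neuroimaging measurements.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects = 100

# Hypothetical subject-by-feature matrices for two modalities,
# e.g., regional gray-matter volumes and functional connectivity strengths.
X_struct = rng.standard_normal((n_subjects, 50))
X_func = rng.standard_normal((n_subjects, 80))

# Fit CCA and project both modalities onto shared components that
# capture correlated (inter-modality) variation across subjects.
cca = CCA(n_components=5)
X_scores, Y_scores = cca.fit_transform(X_struct, X_func)

# Canonical correlations quantify how strongly the fused components
# of the two modalities co-vary across subjects.
canonical_corrs = [
    np.corrcoef(X_scores[:, k], Y_scores[:, k])[0, 1] for k in range(5)
]
print(canonical_corrs)
```

In practice, the resulting component scores can be carried into downstream supervised analyses, for example predicting behavioral phenotypes or disease status, which mirrors the unsupervised-then-supervised workflow surveyed in the paper.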
Keywords
Multimodal fusion, supervised learning, unsupervised learning, brain atlas, cognition, brain disorders