Glacier: glass-box transformer for interpretable dynamic neuroimaging.

Proceedings of the ... IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023

Abstract
Deep learning models can match or exceed human performance on many tasks, especially vision-related ones. Almost exclusively, these models are used for classification or prediction. However, deep learning models are typically black boxes, and it is often difficult to interpret the model or its features. This lack of interpretability has discouraged the application of deep learning in fields such as neuroimaging, where results must be transparent and interpretable. We therefore present a 'glass-box' deep learning model and apply it to neuroimaging. Our model mixes the spatial and temporal dimensions in succession to estimate dynamic connectivity between the brain's intrinsic networks. The interpretable connectivity matrices produced by our model outperform state-of-the-art models on many tasks across multiple functional MRI datasets. More importantly, our model estimates task-dependent, flexible connectivity matrices, unlike static methods such as Pearson's correlation coefficients.
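For illustration only, the following minimal PyTorch sketch shows one way such successive spatial/temporal mixing could be arranged so that the spatial attention weights double as per-window connectivity matrices between components. The class name SpatioTemporalMixer, the tensor shapes, and all parameters are assumptions made for this sketch, not the authors' implementation.

import torch
import torch.nn as nn

class SpatioTemporalMixer(nn.Module):
    # Hypothetical sketch: alternates self-attention over components
    # (space) and time windows (time); not the paper's actual code.
    # Input x: (batch, time_windows, components, features).
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        b, t, c, d = x.shape
        # Spatial pass: attend across components within each time window.
        xs = x.reshape(b * t, c, d)
        xs, conn = self.spatial_attn(xs, xs, xs, need_weights=True,
                                     average_attn_weights=True)
        # conn (b*t, c, c): readable as per-window dynamic connectivity.
        conn = conn.reshape(b, t, c, c)
        # Temporal pass: attend across windows for each component.
        xt = xs.reshape(b, t, c, d).permute(0, 2, 1, 3).reshape(b * c, t, d)
        xt, _ = self.temporal_attn(xt, xt, xt)
        out = xt.reshape(b, c, t, d).permute(0, 2, 1, 3)
        return out, conn

model = SpatioTemporalMixer(dim=64)
x = torch.randn(2, 20, 53, 64)   # e.g., 20 windows over 53 ICA components
out, conn = model(x)             # conn: (2, 20, 53, 53)

By contrast, the static baseline the abstract mentions reduces a whole scan to a single matrix, e.g. numpy.corrcoef applied to component time courses of shape (components, time).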
Keywords
Interpretable DL, neuroimaging, fMRI