Interpretable functional specialization emerges in deep convolutional networks trained on brain signals

Journal of Neural Engineering (2022)

Abstract
Objective. Functional specialization is fundamental to neural information processing. Here, we study whether and how functional specialization emerges in artificial deep convolutional neural networks (CNNs) during a brain-computer interfacing (BCI) task. Approach. We trained CNNs to predict hand movement speed from intracranial electroencephalography (iEEG) and delineated how units across the different CNN hidden layers learned to represent the iEEG signal. Main results. We show that distinct, functionally interpretable neural populations emerged as a result of the training process. While some units became sensitive to either iEEG amplitude or phase, others showed bimodal behavior with significant sensitivity to both features. Pruning of highly sensitive units resulted in a steep drop of decoding accuracy not observed for pruning of less sensitive units, highlighting the functional relevance of the amplitude- and phase-specialized populations. Significance. We anticipate that emergent functional specialization as uncovered here will become a key concept in research towards interpretable deep learning for neuroscience and BCI applications.
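The abstract contrasts units sensitive to iEEG amplitude versus phase. As a purely illustrative aside, the sketch below shows one standard way such instantaneous amplitude and phase features can be extracted from a signal, via an FFT-based Hilbert transform on a synthetic oscillation. This is a minimal numpy sketch, not the paper's decoding pipeline; the signal parameters (a 20 Hz rhythm at 1 kHz sampling) are assumptions chosen for illustration.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: suppress negative frequencies
    so that np.abs / np.angle of the result give the instantaneous
    amplitude envelope and phase of the real input signal."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

# Synthetic oscillation standing in for a band-limited iEEG rhythm:
# 20 Hz cosine sampled at 1 kHz for 1 s (an integer number of cycles).
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.cos(2 * np.pi * 20.0 * t)

z = analytic_signal(signal)
amplitude = np.abs(z)   # instantaneous amplitude (envelope), ~1 here
phase = np.angle(z)     # instantaneous phase in radians
```

For a pure cosine spanning whole cycles, the recovered envelope is flat at 1 and the unwrapped phase advances linearly at 2*pi*20/fs per sample, which makes the amplitude/phase distinction the abstract draws concrete.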
Keywords
motor decoding, intracranial EEG (iEEG), deep learning, brain-computer interface (BCI), neural network visualization, internal representation, explainable AI (XAI)