Learning joint space-time-frequency features for EEG decoding on small labeled data.

Neural Networks (2019)

Abstract
Brain–computer interfaces (BCIs), which control external equipment using cerebral activity, have received considerable attention recently. Translating brain activities measured by electroencephalography (EEG) into correct control commands is a critical problem in this field. Most existing EEG decoding methods separate feature extraction from classification and thus are not robust across different BCI users. In this paper, we propose to learn subject-specific features jointly with the classification rule. We develop a deep convolutional network (ConvNet) that decodes EEG signals end-to-end by stacking time–frequency transformation, spatial filtering, and classification together. Our proposed ConvNet implements a joint space–time–frequency feature extraction scheme for EEG decoding. The Morlet wavelet-like kernels used in our network significantly reduce the number of parameters compared with classical convolutional kernels and endow the features learned at the corresponding layer with a clear interpretation, i.e., spectral amplitude. We further employ subject-to-subject weight transfer, which initializes the network for a new subject with the parameters of networks trained on existing subjects, to resolve the tension between the large amount of data demanded for training deep ConvNets and the small labeled data sets collected in BCIs. The proposed approach is evaluated on three public data sets, obtaining superior classification performance compared with state-of-the-art methods.
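The parameter saving claimed for the Morlet wavelet-like kernels can be illustrated with a minimal sketch: a Morlet-style kernel is fully determined by a center frequency and an envelope width (two parameters) rather than by one free weight per tap as in a classical convolutional kernel, and the magnitude of the filtered signal reads directly as spectral amplitude. The function names, the fixed envelope width, and the frequency bank below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def morlet_kernel(freq_hz, width_s, fs, length):
    """Real-valued Morlet-like kernel: a cosine carrier under a Gaussian
    envelope. Only two parameters (freq_hz, width_s) define all `length`
    taps, versus `length` free weights in a classical conv kernel."""
    t = (np.arange(length) - length // 2) / fs  # centered time axis, seconds
    envelope = np.exp(-t**2 / (2 * width_s**2))
    return envelope * np.cos(2 * np.pi * freq_hz * t)

def spectral_amplitude(eeg_channel, freqs_hz, fs, length=64, width_s=0.1):
    """Convolve one EEG channel with a bank of Morlet-like kernels and take
    the absolute response: a crude time-frequency transform whose output
    is directly interpretable as spectral amplitude per frequency band.
    (Illustrative stand-in for the network's first layer.)"""
    rows = []
    for f in freqs_hz:
        k = morlet_kernel(f, width_s, fs, length)
        rows.append(np.abs(np.convolve(eeg_channel, k, mode="same")))
    return np.stack(rows)  # shape: (n_freqs, n_samples)
```

As a sanity check, a pure 10 Hz oscillation at fs = 250 Hz produces its strongest mean response in the 10 Hz band of the filter bank, which is the interpretability property the abstract points to.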
Keywords
Brain–computer interfaces,Convolutional neural network,Joint space–time–frequency feature learning,Subject-to-subject weight transfer,Small labeled data