A Cross-Scale Transformer and Triple-View Attention Based Domain-Rectified Transfer Learning for EEG Classification in RSVP Tasks

IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING (2024)

Abstract
Rapid serial visual presentation (RSVP)-based brain-computer interfaces (BCIs) are a promising target detection technique that uses electroencephalogram (EEG) signals. However, existing deep learning approaches have seldom considered the dependencies of multi-scale temporal features and discriminative multi-view spectral features simultaneously, which limits the representation learning ability of the model and undermines EEG classification performance. In addition, recent transfer learning-based methods generally fail to obtain transferable cross-subject invariant representations and commonly ignore individual-specific information, leading to poor cross-subject transfer performance. In response to these limitations, we propose a cross-scale Transformer and triple-view attention based domain-rectified transfer learning (CST-TVA-DRTL) framework for RSVP classification. Specifically, we first develop a cross-scale Transformer (CST) to extract multi-scale temporal features and exploit the dependencies among features at different scales. Then, a triple-view attention (TVA) module is designed to capture spectral features from three views of multi-channel time-frequency images. Finally, a domain-rectified transfer learning (DRTL) framework is proposed to simultaneously obtain transferable domain-invariant representations and untransferable domain-specific representations, and then utilize the domain-specific information to rectify the domain-invariant representations to adapt to the target data. Experimental results on two public RSVP datasets suggest that CST-TVA-DRTL outperforms state-of-the-art methods on the RSVP classification task.
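To make the cross-scale idea concrete, the following is a toy NumPy sketch (not the authors' architecture, whose details are not given in the abstract): each temporal scale is obtained by average-pooling the EEG with a different window, each scale is summarized as one token, and scaled dot-product attention lets the scales attend to each other to model cross-scale dependencies. The pooling windows, token construction, and single-head attention are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool(x, k):
    # x: (channels, time); non-overlapping average pooling with window k
    # yields one coarser temporal scale of the signal (assumed pooling scheme)
    c, t = x.shape
    t_out = t // k
    return x[:, :t_out * k].reshape(c, t_out, k).mean(axis=2)

def cross_scale_attention(eeg, scales=(1, 2, 4)):
    # eeg: (channels, time). One token per scale: the time-averaged
    # channel vector of that scale (simplified stand-in for learned features).
    tokens = np.stack([avg_pool(eeg, k).mean(axis=1) for k in scales])  # (S, C)
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)  # (S, S) scale-to-scale affinities
    attn = softmax(scores, axis=-1)          # each scale attends to all scales
    return attn @ tokens                     # (S, C) fused multi-scale features

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))          # 8 channels, 256 time samples
fused = cross_scale_attention(eeg)
print(fused.shape)                           # (3, 8): one fused vector per scale
```

In the actual CST, the per-scale tokens would be produced by learned encoders and the attention would use trained query/key/value projections; this sketch only illustrates how attention across scale tokens can capture dependencies between temporal resolutions.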
Keywords
Brain-computer interface, EEG, RSVP, Transformer, transfer learning