Investigating Self-Supervised Deep Representations for EEG-based Auditory Attention Decoding
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023)
Abstract
Auditory Attention Decoding (AAD) algorithms play a crucial role in isolating
desired sound sources within challenging acoustic environments directly from
brain activity. Although recent research has shown promise in AAD using shallow
representations such as auditory envelope and spectrogram, there has been
limited exploration of deep Self-Supervised (SS) representations on a larger
scale. In this study, we undertake a comprehensive investigation into the
performance of linear decoders across 12 deep and 2 shallow representations,
applied to EEG data from multiple studies spanning 57 subjects and multiple
languages. Our experimental results consistently reveal the superiority of deep
features for AAD in decoding background speakers, regardless of the dataset
and analysis window. This result suggests that unattended signals may be
encoded nonlinearly in the brain, and that this encoding is revealed by deep
nonlinear features. Additionally, we analyze the impact of different layers of SS
representations and window sizes on AAD performance. These findings underscore
the potential for enhancing EEG-based AAD systems through the integration of
deep feature representations.
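To make the setup concrete, the linear decoders evaluated in the abstract are typically backward (stimulus-reconstruction) models: a ridge regression maps time-lagged EEG to a speech representation, and attention is decoded by comparing the correlation of the reconstruction with each speaker's representation. The sketch below illustrates this general scheme on synthetic data; it is not the paper's implementation, and all names, shapes, and the regularization value are illustrative assumptions.

```python
# Hedged sketch of a linear backward AAD decoder on synthetic data.
# A ridge regression maps time-lagged EEG to a speech feature (e.g. an
# envelope or a deep SS representation channel); the speaker whose
# feature correlates more with the reconstruction is labelled attended.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 2000, 8, 5

# Synthetic speech features for the two competing speakers.
attended = rng.standard_normal(n_samples)
unattended = rng.standard_normal(n_samples)

# Synthetic EEG that weakly encodes the attended feature plus noise.
mixing = rng.standard_normal(n_channels)
eeg = np.outer(attended, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

def lagged(X, n_lags):
    """Stack time-lagged copies of each EEG channel as regression features."""
    return np.hstack([np.roll(X, lag, axis=0) for lag in range(n_lags)])

X = lagged(eeg, n_lags)

# Closed-form ridge regression from lagged EEG to the attended feature.
lam = 1.0  # assumed regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ attended)
reconstruction = X @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_att = corr(reconstruction, attended)
r_unatt = corr(reconstruction, unattended)
decision = "attended" if r_att > r_unatt else "unattended"
print(decision)
```

In practice this decision is made per analysis window, which is why the abstract studies AAD accuracy as a function of window size; shorter windows yield noisier correlation estimates and lower accuracy.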
Keywords
auditory attention decoding, electroencephalogram (EEG), self-supervised speech representations