Explanations of Deep Networks on EEG Data via Interpretable Approaches

2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), 2022

Abstract
Despite achieving success in many domains, deep learning models remain mostly black boxes. However, understanding the reasons behind predictions is crucial for establishing trust, which is fundamental in EEG analysis tasks. In this work, we propose to use two representative explanation approaches, LIME and Grad-CAM, to explain the predictions of a simple convolutional neural network on an EEG-based emotional brain-computer interface. Our results demonstrate that the interpretability approaches provide an understanding of which features better discriminate the target emotions and offer insights into the neural processes underlying the model's learned behaviors.
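To make the Grad-CAM step concrete, here is a minimal NumPy sketch of the core computation: channel weights are the global-average-pooled gradients of the class score with respect to the convolutional feature maps, and the heatmap is the ReLU of the weighted channel sum. This is an illustrative sketch with toy arrays, not the authors' implementation; the shapes and values are assumptions for demonstration only.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv activations and class-score gradients.

    feature_maps: array of shape (K, H, W), activations of K channels.
    gradients:    array of shape (K, H, W), d(class score)/d(activation).
    """
    # alpha_k: global average pool of the gradients per channel
    weights = gradients.mean(axis=(1, 2))
    # weighted sum of feature maps over the channel axis
    cam = np.tensordot(weights, feature_maps, axes=1)
    # ReLU keeps only features with positive influence on the class
    return np.maximum(cam, 0)

# Toy example (hypothetical values): 2 channels of 3x3 feature maps
fmaps = np.array([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]],
                  [[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]]])
grads = np.array([[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]],
                  [[-1., -1., -1.], [-1., -1., -1.], [-1., -1., -1.]]])
heatmap = grad_cam(fmaps, grads)
```

For an EEG model, the resulting heatmap can be projected back onto the channel-by-time input to highlight which electrodes and time windows drove the emotion prediction.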
Keywords
emotional BCI, EEG, interpretability