Deep Learning Human Mind for Automated Visual Classification

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Citations: 245 | Views: 179
Abstract
What if we could effectively read the mind and transfer human visual capabilities to computer vision methods? In this paper, we aim at addressing this question by developing the first visual object classifier driven by human brain signals. In particular, we employ EEG data evoked by visual object stimuli combined with Recurrent Neural Networks (RNN) to learn a discriminative brain activity manifold of visual categories. Afterwards, we train a Convolutional Neural Network (CNN)-based regressor to project images onto the learned manifold, thus effectively allowing machines to employ human brain-based features for automated visual classification. We use a 128-channel EEG to record the brain activity of subjects while they look at images of 40 ImageNet object classes. The proposed RNN-based approach for discriminating object classes using brain signals reaches an average accuracy of about 40%, outperforming existing methods that attempt to learn EEG visual object representations. As for automated object categorization, our human brain-driven approach obtains competitive performance, comparable to that achieved by powerful CNN models, on both ImageNet and Caltech-101, thus demonstrating its classification and generalization capabilities. This gives us real hope that, indeed, the human mind can be read and transferred to machines.
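The two-stage pipeline in the abstract (an RNN encodes EEG sequences into a "brain activity manifold"; a regressor then projects image features onto that manifold, where classification happens) can be sketched as follows. This is a minimal illustrative NumPy sketch only: the vanilla RNN cell, the single linear map standing in for the paper's CNN regressor, the nearest-class-mean classifier, and all dimensions except the 128 EEG channels and 40 classes are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# From the paper: 128 EEG channels, 40 ImageNet classes.
# Everything else below is an illustrative assumption.
N_CHANNELS = 128
N_CLASSES = 40
N_STEPS = 50       # time samples per EEG recording (assumed)
HIDDEN = 64        # dimensionality of the learned manifold (assumed)
IMG_FEAT = 256     # dimensionality of precomputed image features (assumed)

# --- Stage 1 sketch: an RNN encoder maps an EEG sequence to a manifold vector.
W_xh = rng.normal(scale=0.1, size=(HIDDEN, N_CHANNELS))
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))

def encode_eeg(eeg):
    """Run a vanilla RNN over an (N_STEPS, N_CHANNELS) EEG recording;
    the final hidden state serves as the manifold representation."""
    h = np.zeros(HIDDEN)
    for x_t in eeg:
        h = np.tanh(W_xh @ x_t + W_hh @ h)
    return h

# --- Stage 2 sketch: project image features onto the same manifold.
# (The paper trains a CNN-based regressor; a linear map stands in here.)
W_reg = rng.normal(scale=0.1, size=(HIDDEN, IMG_FEAT))

def project_image(img_feat):
    return W_reg @ img_feat

# Classification: compare a projected image against per-class mean EEG
# encodings (random EEG stands in for recorded data in this sketch).
class_means = np.stack([
    encode_eeg(rng.normal(size=(N_STEPS, N_CHANNELS)))
    for _ in range(N_CLASSES)
])

def classify(img_feat):
    z = project_image(img_feat)
    dists = np.linalg.norm(class_means - z, axis=1)
    return int(np.argmin(dists))

pred = classify(rng.normal(size=IMG_FEAT))
print(pred)  # an integer class index in [0, 40)
```

The key design point the sketch mirrors is that images are never classified directly: they are first mapped into the EEG-derived feature space, so the classifier operates on "brain-based" features.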
Keywords
human brain-driven approach,human mind,automated visual classification,human visual capabilities,computer vision methods,visual object classifier,human brain signals,visual object stimuli,RNN,discriminative brain activity,visual categories,visual object representations,automated object categorization,recurrent neural networks,convolutional neural network,ImageNet object classes,128-channel EEG,EEG visual object representations