See Through Their Minds: Learning Transferable Neural Representation from Cross-Subject fMRI
arXiv (2024)
Abstract
Deciphering visual content from functional Magnetic Resonance Imaging (fMRI)
helps illuminate the human visual system. However, the scarcity and noisiness
of fMRI data hamper the performance of brain decoding models. Previous
approaches primarily employ subject-specific models, which are sensitive to
the training sample size. In
this paper, we explore a straightforward but overlooked solution to address
data scarcity. We propose shallow subject-specific adapters to map
cross-subject fMRI data into unified representations. A deeper, shared
decoding model then decodes these cross-subject features into the target
feature space. During training, we leverage both visual and textual
supervision for
multi-modal brain decoding. Our model integrates a high-level perception
decoding pipeline and a pixel-wise reconstruction pipeline guided by high-level
perceptions, simulating bottom-up and top-down processes in neuroscience.
Empirical experiments demonstrate robust neural representation learning across
subjects for both pipelines. Moreover, merging high-level and low-level
information improves both low-level and high-level reconstruction metrics.
Additionally, we successfully transfer learned general knowledge to new
subjects by training new adapters with limited training data. Compared to
previous state-of-the-art methods, notably pre-training-based methods (Mind-Vis
and fMRI-PTE), our approach achieves comparable or superior results across
diverse tasks, showing promise as an alternative method for cross-subject fMRI
data pre-training. Our code and pre-trained weights will be publicly released
at https://github.com/YulongBonjour/See_Through_Their_Minds.
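The abstract's core design, shallow per-subject adapters feeding a deeper
decoder shared across subjects, can be illustrated with a minimal PyTorch
sketch. Everything below is an assumption for illustration only: the class
names `SubjectAdapter` and `SharedDecoder`, the single-linear-layer adapter,
the MLP decoder, and all dimensions (the voxel counts, the 1024-d unified
latent, the 768-d CLIP-like target space) are hypothetical and not taken from
the paper.

```python
import torch
import torch.nn as nn


class SubjectAdapter(nn.Module):
    """Shallow per-subject adapter (hypothetical): maps one subject's
    flattened fMRI voxels into a unified latent space shared by all
    subjects. Voxel counts differ per subject, so each subject gets
    its own adapter."""

    def __init__(self, n_voxels: int, latent_dim: int = 1024):
        super().__init__()
        # A single linear layer keeps the adapter "shallow"; only this
        # module would be trained when adapting to a new subject.
        self.proj = nn.Linear(n_voxels, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class SharedDecoder(nn.Module):
    """Deeper decoder shared across subjects (hypothetical MLP): maps
    unified latents into a target feature space, e.g. a CLIP-like
    image/text embedding used for high-level perception decoding."""

    def __init__(self, latent_dim: int = 1024, target_dim: int = 768,
                 depth: int = 4):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(latent_dim, latent_dim), nn.GELU()]
        layers.append(nn.Linear(latent_dim, target_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


# One adapter per subject (illustrative voxel counts), one shared decoder.
adapters = nn.ModuleDict({
    "subj01": SubjectAdapter(n_voxels=15724),
    "subj02": SubjectAdapter(n_voxels=14278),
})
decoder = SharedDecoder()

fmri = torch.randn(8, 15724)              # a batch from subject 1
pred = decoder(adapters["subj01"](fmri))  # (8, 768) target-space features
```

Under this factorization, the transfer setting described in the abstract
corresponds to freezing the shared decoder and training only a fresh adapter
for the new subject, so the limited new-subject data is spent on the shallow
mapping rather than on the whole decoding model.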