Voice and Face Gender Perception engages multimodal integration via multiple feedback pathways

bioRxiv (2020)

Abstract
Multimodal integration provides an ideal framework for investigating top-down influences in perceptual integration. Here, we investigate the mechanisms and functional networks involved in face-voice multimodal integration during gender perception, using complementary behavioral (Maximum Likelihood Conjoint Measurement) and brain imaging (Dynamic Causal Modeling of fMRI data) techniques. Thirty-six subjects were instructed to judge pairs of face-voice stimuli according to the gender of the face (face task), the voice (voice task), or the stimulus as a whole (stimulus task; no specific modality instruction given). Face and voice contributions to the tasks were not independent, as both modalities contributed significantly to all tasks. The top-down influences in each task could be modeled as a differential weighting of the contributions of each modality, with an asymmetry favoring the auditory modality in the magnitude of the effect. Additionally, we observed two independent interaction effects in the decision process, reflecting both the coherence of the gender information across modalities and the magnitude of the gender difference from neutral. In a second experiment, we used functional MRI to investigate the modulation of effective connectivity between the Fusiform Face Area (FFA) and the Temporal Voice Area (TVA), two cortical areas implicated in face and voice processing. Twelve participants were presented with multimodal face-voice stimuli and instructed to attend to face, voice, or any gender information. We found specific changes in effective connectivity between these areas in the same conditions that generated behavioral interactions. Taken together, we interpret these results as converging evidence for the existence of multiple parallel hierarchical systems in multimodal integration.
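For readers unfamiliar with Maximum Likelihood Conjoint Measurement, the sketch below illustrates the general idea behind the kind of additive weighting model described in the abstract: on each trial the observer compares two face-voice pairs, and the relative contribution of each modality is recovered by maximum-likelihood fitting of a probit decision model. This is a simplified illustration, not the authors' analysis; the simulated data, the "true" weights, and all variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical simulated experiment: each trial pairs two face-voice stimuli
# whose gender levels are coded on a morph axis (-1 = clearly female ... +1 =
# clearly male). The observer reports which pair member appears more masculine.
rng = np.random.default_rng(0)
n_trials = 2000
face = rng.uniform(-1, 1, size=(n_trials, 2))   # face gender level, stimulus 1 and 2
voice = rng.uniform(-1, 1, size=(n_trials, 2))  # voice gender level, stimulus 1 and 2

# Assumed generating weights, used only to simulate responses (voice weighted
# more heavily, mirroring the auditory asymmetry described in the abstract).
w_face_true, w_voice_true = 0.6, 1.2
decision = (w_face_true * (face[:, 0] - face[:, 1])
            + w_voice_true * (voice[:, 0] - voice[:, 1])
            + rng.normal(size=n_trials))          # additive internal noise
resp = (decision > 0).astype(int)                 # 1 = "stimulus 1 more masculine"

def neg_log_lik(w):
    """Negative log-likelihood of an additive conjoint model with a probit link."""
    w_f, w_v = w
    delta = w_f * (face[:, 0] - face[:, 1]) + w_v * (voice[:, 0] - voice[:, 1])
    p = norm.cdf(delta).clip(1e-9, 1 - 1e-9)      # avoid log(0)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("estimated face / voice weights:", fit.x)
```

In this toy setup the recovered weights approximate the generating ones, and comparing fitted weights across task instructions (face, voice, or stimulus task) would correspond to the differential weighting interpretation of top-down influence discussed in the abstract.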