Development of audio-tactile temporal binding with and without vision

Journal of Vision (2023)

Abstract
In every moment, our brain processes a multitude of sensory information that needs to be integrated, separated, and ordered in space and time to derive a coherent representation of the environment. Since the lack of one modality can modify the development of the remaining modalities in terms of both unisensory and multisensory processes, the present study aims to investigate the development of audio-tactile temporal processing with and without vision. We asked 20 sighted and 20 visually impaired children aged between 6 and 15 years, and 15 sighted and 15 visually impaired adults, to perform an audio-tactile temporal order judgment task. Participants were presented with pairs of audio-tactile stimuli with different onset asynchronies and had to judge which stimulus appeared first. To explore the possible role of the relative spatial position from which the stimuli were presented, audio-tactile stimuli could be delivered to either the same hand or different hands of the participants. The temporal binding window (TBW), a timeframe within which multiple stimuli are highly likely to be perceived as one, was extracted for each participant and compared across groups. Preliminary results showed that the TBW of sighted children was significantly wider than that of age-matched visually impaired children, specifically when stimuli came from the same location in space. A wider TBW indicates less precise temporal coding of sensory stimuli. In contrast, no differences were found either between the two groups of adults or between visually impaired children and adults. These findings support the hypothesis of cross-modal compensation after sensory deprivation. Visually impaired children may compensate for the lack of vision by optimizing audio-tactile temporal binding earlier.
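The abstract does not report how the TBW was extracted, so the sketch below is only an illustration of one common approach: fitting a cumulative Gaussian psychometric function to the proportion of "sound first" responses as a function of stimulus onset asynchrony (SOA), and defining the TBW as the SOA range between the 25% and 75% points (twice the just-noticeable difference). The SOA values, response proportions, and function names are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): estimating a temporal
# binding window (TBW) from temporal order judgment (TOJ) responses by
# fitting a cumulative Gaussian psychometric function.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P('sound first') as a cumulative Gaussian of SOA (ms).
    pss  : point of subjective simultaneity
    sigma: slope parameter; larger sigma = less precise temporal coding."""
    return norm.cdf(soa, loc=pss, scale=sigma)

def estimate_tbw(soas_ms, p_sound_first):
    """Fit the psychometric curve and return (PSS, TBW).
    TBW is taken here as the SOA range between the 25% and 75% points,
    i.e. 2 * JND, one common operational definition (an assumption)."""
    (pss, sigma), _ = curve_fit(psychometric, soas_ms, p_sound_first,
                                p0=[0.0, 100.0])
    jnd = sigma * norm.ppf(0.75)   # SOA shift from PSS to the 75% point
    return pss, 2.0 * jnd          # window spans the 25%-75% points

# Example with made-up response proportions (negative SOA = touch first)
soas = np.array([-400, -200, -100, -50, 0, 50, 100, 200, 400], dtype=float)
p_sf = np.array([0.05, 0.15, 0.30, 0.40, 0.55, 0.65, 0.75, 0.90, 0.95])
pss, tbw = estimate_tbw(soas, p_sf)
print(f"PSS = {pss:.1f} ms, TBW = {tbw:.1f} ms")
```

Under this definition, a wider fitted window (larger sigma) corresponds to the less precise audio-tactile temporal coding attributed to the sighted children in the abstract.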
Keywords
binding, audio-tactile