Compensatory cross-modal effects of sentence context on visual word recognition in adults

Reading and Writing (2021)

Abstract
Reading involves mapping combinations of a learned visual code (letters) onto meaning. Previous studies have shown that when visual word recognition is challenged by visual degradation, one way to mitigate these negative effects is to provide "top-down" contextual support through a written congruent sentence context. Crowding is a naturally occurring visual phenomenon that impairs object recognition and also affects the recognition of written stimuli during reading. Thus, access to a supporting semantic context via a written text is vulnerable to the detrimental impact of crowding on letters and words. Here, we suggest that an auditory sentence context may provide an alternative source of semantic information that is not influenced by crowding, thus providing "top-down" support cross-modally. The goal of the current study was to investigate whether adult readers can cross-modally compensate for crowding in visual word recognition using an auditory sentence context. The results show a significant cross-modal interaction between the congruency of the auditory sentence context and visual crowding, suggesting that interactions can occur across multiple levels of processing and across different modalities to support reading processes. These findings highlight the need for reading models to specify in greater detail how top-down, cross-modal and interactive mechanisms may allow readers to compensate for deficiencies at early stages of visual processing.
Keywords
Auditory sentence context, Crowding, Lexical decision, Orthographic processing, Word recognition