Performance of Deaf Participants in an Abstract Visual Grammar Learning Task at Multiple Formal Levels: Evaluating the Auditory Scaffolding Hypothesis

COGNITIVE SCIENCE(2022)

Abstract
Previous research has hypothesized that human sequential processing may be dependent upon hearing experience (the "auditory scaffolding hypothesis"), predicting that sequential rule learning abilities should be hindered by congenital deafness. To test this hypothesis, we compared deaf signer and hearing individuals' ability to acquire rules of different computational complexity in a visual artificial grammar learning task using sequential stimuli. As a group, deaf participants succeeded at all levels of the task; Bayesian analysis indicates that they successfully acquired each of several target grammars at ascending levels of the formal language hierarchy. Overall, these results do not support the auditory scaffolding hypothesis. However, age- and education-matched hearing participants did outperform deaf participants in two out of three tested grammars. We suggest that this difference may be related to verbal recoding strategies in the two groups. Any verbal recoding strategies used by the deaf signers would be less effective because they would have to use the same visual channel required for the experimental task.
Keywords
Visual artificial grammar learning, Mildly context-sensitive grammars, Sequencing, Deafness, Auditory scaffolding hypothesis