Learning orthographic and phonological representations in models of monosyllabic and bisyllabic naming

European Journal of Cognitive Psychology (2010)

Abstract
Most current models of word naming are restricted to processing monosyllabic words and pseudowords. This limitation stems from difficulties in representing the orthographic and phonological codes of words that vary substantially in length. Sibley, Kello, Plaut, and Elman (2008) described an extension of the simple recurrent network architecture, called the sequence encoder, that learned orthographic and phonological representations of variable-length words. The present research explored the use of sequence encoders in models of monosyllabic and bisyllabic word naming. The performance of these models is comparable to that of other models in terms of word and pseudoword naming accuracy, as well as in accounting for naming latency phenomena. Although the models do not address all naming phenomena, the results suggest that sequence encoders can learn orthographic and phonological representations, making it easier to create models that scale up to larger vocabularies while accounting for behavioural data.
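To make the idea of a sequence encoder concrete, the sketch below shows a forward pass through a plain Elman-style simple recurrent network that reads a letter string one character at a time and leaves a fixed-width hidden vector as the word's representation. All names, layer sizes, the one-hot letter coding, and the end-of-word marker here are illustrative assumptions for a generic SRN, not the architecture details or training regime reported by Sibley et al. (2008).

```python
# Minimal sketch of an SRN-based sequence encoder (assumed sizes and coding).
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz#"   # '#' marks end of word (assumption)
N_IN = len(ALPHABET)                        # one-hot input units
N_HID = 100                                 # hidden/context units (assumed size)

# Randomly initialised weights stand in for trained ones.
W_in = rng.normal(0, 0.1, (N_HID, N_IN))    # input  -> hidden
W_rec = rng.normal(0, 0.1, (N_HID, N_HID))  # hidden -> hidden (recurrent copy)
W_out = rng.normal(0, 0.1, (N_IN, N_HID))   # hidden -> output letter units

def one_hot(ch):
    v = np.zeros(N_IN)
    v[ALPHABET.index(ch)] = 1.0
    return v

def encode(word):
    """Read letters one at a time; the final hidden state serves as the
    fixed-width representation of the variable-length word."""
    h = np.zeros(N_HID)
    for ch in word + "#":
        h = np.tanh(W_in @ one_hot(ch) + W_rec @ h)
    return h

def decode_step(h):
    """One decoding step: a softmax distribution over letters produced
    from the current hidden state (an assumed output choice)."""
    z = W_out @ h
    return np.exp(z) / np.exp(z).sum()

if __name__ == "__main__":
    for w in ["cat", "catalog"]:            # short vs. longer word, same code size
        code = encode(w)
        print(w, "-> fixed-width code of length", code.shape[0])
```

The point of the sketch is only that words of different lengths map onto representations of the same dimensionality, which is what lets such models scale beyond monosyllables; training the encoder and decoder end to end is omitted here.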
Keywords
Large-scale connectionist modelling, Word reading, Sequence encoder, Simple recurrent network, Lexical processing