Do arbitrary input–output mappings in parallel distributed processing networks require localist coding?

Language, Cognition and Neuroscience (2017)

Abstract
The Parallel Distributed Processing (PDP) approach to cognitive modelling assumes that knowledge is distributed across multiple processing units. This view is typically justified on the basis of the computational advantages and biological plausibility of distributed representations. However, both of these assumptions have been challenged. First, there is growing evidence that some neurons respond to information in a highly selective manner. Second, it has been demonstrated that localist representations are better suited to certain computational tasks. In this paper, we continue this line of research by investigating whether localist representations are learned in tasks involving arbitrary input–output mappings. The results suggest that the pressure to learn local codes in such tasks is weak, but that there are nonetheless conditions under which feed-forward PDP networks learn localist representations. Our findings further challenge the assumption that PDP modelling always goes hand in hand with distributed representations, and they provide directions for future research.
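For readers unfamiliar with the setup, the following is a minimal sketch of the kind of simulation the abstract describes: a feed-forward network trained by backpropagation on an arbitrary one-to-one input–output mapping, followed by a simple test for localist hidden units. The architecture, learning rule, network sizes, and the 0.8/0.2 selectivity thresholds are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

# Sketch: train a one-hidden-layer feed-forward network (no bias terms,
# kept minimal) on an arbitrary mapping between one-hot input patterns
# and randomly permuted one-hot output patterns.
rng = np.random.default_rng(0)

n_items, n_in, n_hidden, n_out = 20, 20, 30, 20
X = np.eye(n_items)                             # one input pattern per item
Y = np.eye(n_items)[rng.permutation(n_items)]   # arbitrary item -> output mapping

W1 = rng.normal(0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    H = sigmoid(X @ W1)                         # hidden activations
    O = sigmoid(H @ W2)                         # output activations
    err = O - Y                                 # error signal (sigmoid + cross-entropy)
    W2 -= lr * H.T @ err / n_items
    W1 -= lr * X.T @ ((err @ W2.T) * H * (1 - H)) / n_items

# Selectivity check: call a hidden unit "localist" if it is strongly
# active (>0.8) for exactly one item and weakly active (<0.2) for all
# the others. The thresholds are arbitrary choices for this sketch.
H = sigmoid(X @ W1)
local_units = [j for j in range(n_hidden)
               if (H[:, j] > 0.8).sum() == 1
               and (H[:, j] < 0.2).sum() == n_items - 1]
print(f"{len(local_units)} of {n_hidden} hidden units meet this localist criterion")
```

A distributed solution would instead spread each item's encoding across many moderately active hidden units, so counting units that pass a strict selectivity criterion like this is one simple way to quantify how localist a learned representation is.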
Keywords
Localist representations, distributed representations, neural networks, PDP, arbitrary input–output mapping