Learning allocentric representations of space for navigation.

Neurocomputing (2021)

Abstract
The hippocampus of the mammalian brain supports spatial navigation by building cognitive maps of the environments the animal explores. Currently, there is little neurocomputational work investigating the encoding and decoding mechanisms of hippocampal neural representations in large-scale environments. We propose a biologically inspired hierarchical neural network architecture that learns to transform egocentric sensorimotor inputs into allocentric spatial representations for navigation. The hierarchical network is composed of two parallel subnetworks mimicking the lateral entorhinal cortex (LEC) and medial entorhinal cortex (MEC), and one convergent subnetwork mimicking the hippocampus. The LEC relays time-related visual information, while the MEC supplies space-related information in the form of multi-resolution grid codes obtained by integrating movement information. The convergent subnetwork integrates all information from the parallel subnetworks and predicts the position of the agent in the environment. Synaptic weights of the vision-to-place and grid-to-place connections are learned with the stochastic gradient descent algorithm. Simulations in a large virtual maze demonstrate that hippocampal place units in the model form multiple, irregularly spaced place fields, similar to those observed in neurobiological experiments. The model accurately decodes the agent's position from the learned spatial representations. Moreover, the model adapts to degraded visual inputs and is therefore robust against perturbations. When motion inputs are deprived, the model has difficulty localizing and its position predictions become less accurate.
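The abstract describes a two-stream architecture: an LEC-like visual pathway and an MEC-like grid-code pathway converge onto a hippocampus-like place layer whose activity is used to predict the agent's position, with the connection weights trained by stochastic gradient descent. Below is a minimal PyTorch sketch of that layout; the layer sizes, module names, discretised position read-out, and the random placeholder inputs are illustrative assumptions, not the authors' exact HippDNN design.

```python
import torch
import torch.nn as nn

class HippocampalPlaceNet(nn.Module):
    """Illustrative two-stream network: a visual (LEC-like) stream and a
    grid-code (MEC-like) stream converge onto a place-cell layer whose
    activity is decoded into the agent's allocentric position.
    All dimensions here are assumptions for the sketch."""

    def __init__(self, n_visual=256, n_grid=128, n_place=512, n_positions=100):
        super().__init__()
        self.lec = nn.Sequential(nn.Linear(n_visual, 256), nn.ReLU())  # vision-to-place input stream
        self.mec = nn.Sequential(nn.Linear(n_grid, 256), nn.ReLU())    # grid-to-place input stream
        self.place = nn.Linear(256 + 256, n_place)                     # convergent "hippocampal" layer
        self.readout = nn.Linear(n_place, n_positions)                 # position decoder over spatial bins

    def forward(self, visual, grid):
        h = torch.cat([self.lec(visual), self.mec(grid)], dim=-1)
        place_activity = torch.relu(self.place(h))
        return self.readout(place_activity), place_activity

# Training with plain SGD, matching the abstract's statement that the
# vision-to-place and grid-to-place weights are learned by stochastic
# gradient descent. Inputs and labels below are random placeholders.
model = HippocampalPlaceNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

visual = torch.randn(32, 256)               # egocentric visual features (placeholder)
grid = torch.randn(32, 128)                 # multi-resolution grid codes (placeholder)
target_bins = torch.randint(0, 100, (32,))  # discretised position labels (assumption)

logits, _ = model(visual, grid)
loss = loss_fn(logits, target_bins)
loss.backward()
optimizer.step()
```

Degraded visual input can be mimicked in this sketch by zeroing or noising the `visual` tensor, and motion deprivation by zeroing `grid`, which is one simple way to probe the robustness behaviour the abstract reports.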
Keywords
Deep learning,Localization,Large-scale environment,Place cells,Sensorimotor integration,HippDNN