Robust working memory in a two-dimensional continuous attractor network

Cognitive Neurodynamics (2023)

Abstract
Continuous bump attractor networks (CANs) have been widely used in the past to explain the phenomenology of working memory (WM) tasks in which continuous-valued information has to be maintained to guide future behavior. Standard CAN models suffer from two major limitations: the stereotyped shape of the bump attractor does not reflect differences in the representational quality of WM items, and the recurrent connections within the network require a biologically unrealistic level of fine tuning. We address both challenges in a two-dimensional (2D) network model formalized by two coupled neural field equations of Amari type. It combines the lateral-inhibition-type connectivity of classical CANs with a locally balanced excitatory and inhibitory feedback loop. We first use a radially symmetric connectivity to analyze the existence, stability and bifurcation structure of 2D bumps representing the conjunctive WM of two input dimensions. To address the quality of WM content, we show in model simulations that the bump amplitude reflects the temporal integration of bottom-up and top-down evidence for a specific combination of input features. This includes the network's capacity to transform a stable subthreshold memory trace of a weak input into a high-fidelity memory representation by an unspecific cue given retrospectively during WM maintenance. To address the fine-tuning problem, we numerically test different perturbations of the assumed radial symmetry of the connectivity function, including random spatial fluctuations in the connection strength. Unlike in standard CAN models, the bump does not drift in representational space but remains stationary at the input position.
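To make the CAN mechanism referenced in the abstract concrete, the following is a minimal sketch of a classical single-field 2D Amari equation with lateral-inhibition ("Mexican hat") connectivity, in which a transient localized cue leaves behind a self-sustained bump. It is not the paper's actual two-field model with a locally balanced excitatory/inhibitory feedback loop; the kernel shape, firing-rate function, and all parameter values below are illustrative assumptions.

```python
# Sketch of a classical 2D Amari bump attractor: tau*du/dt = -u + w * f(u) + I,
# with the spatial convolution evaluated via FFT on a periodic grid.
# Parameters are assumed for illustration, not taken from the paper.
import numpy as np

N, L = 128, 20.0                      # grid points per dimension, domain size
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
R2 = X**2 + Y**2

# Lateral-inhibition kernel: narrow local excitation, broader inhibition.
A_e, s_e, A_i, s_i = 3.0, 1.0, 1.5, 2.0
w = A_e * np.exp(-R2 / (2 * s_e**2)) - A_i * np.exp(-R2 / (2 * s_i**2))
w_hat = np.fft.rfft2(np.fft.ifftshift(w)) * dx**2   # kernel in Fourier space

def f(u, beta=5.0, theta=0.5):
    """Sigmoidal firing-rate function with gain beta and threshold theta."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def step(u, I, dt=0.05, tau=1.0):
    """One explicit Euler step of the field equation."""
    conv = np.fft.irfft2(w_hat * np.fft.rfft2(f(u)), s=u.shape)
    return u + dt / tau * (-u + conv + I)

# Transient localized cue at the origin; with these assumed parameters the
# recurrent excitation should keep a bump alive after the cue is removed.
I_cue = 2.0 * np.exp(-R2 / (2 * 0.8**2))
u = np.full((N, N), -0.5)
for t in range(400):
    u = step(u, I_cue if t < 100 else 0.0)

print("peak activity after cue removal:", float(u.max()))
```

In this stripped-down form the bump's shape and amplitude are essentially stereotyped and its position is sensitive to kernel inhomogeneities, which is exactly the pair of limitations the paper's coupled two-field formulation is designed to overcome.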
Keywords
Continuous bump attractor, Two-dimensional neural field, Working memory, Memory fidelity, Robust neural integrator