Extreme sensitivity of reservoir computing to small network disruptions

BMC Neuroscience (2015)

Abstract
Recent computational models based on reservoir computing (RC) are gaining attention as plausible theories of cortical information processing. In these models, the activity of a recurrently connected population of neurons is sent to one or more read-out units through a linear transformation. These models can operate in a chaotic regime, which has been proposed as a possible mechanism underlying the sustained irregular activity observed in cortical areas [1,2]. Furthermore, models based on RC replicate the neural dynamics involved in decision making [3], interval timing [2], and motor control [1]. However, one biological constraint that has been overlooked in these models is their resistance to small connectivity perturbations such as failures in synaptic transmission, a phenomenon that occurs frequently in healthy circuits without causing any drastic functional changes. Here, we show that different implementations of RC display very little resistance to small synaptic disruptions, and we discuss the implications of such fragility for RC mechanisms that may be present in neural coding. With the FORCE procedure [1], networks lost their ability to replicate a jagged sinusoidal signal after a single neuron was removed from the reservoir (Figure 1A). Networks with innate training [2] showed a similar effect on a timing task (Figure 1B). The lag in the timing and the noise in the output both increased monotonically as further neurons were removed (Figure 1C,D); networks reached random performance after ~1.5% of neurons were eliminated. After the suppression of a single neuron, the spectrum of the weight matrix was greatly disturbed, and repeated trials displayed unreliable trajectories, as assessed with principal components analysis. When individual synapses were removed instead of neurons, networks reached random performance after ~0.5% of synapses from the reservoir were eliminated.
While living neuronal circuits can withstand small synaptic disruptions without compromising task performance, our results suggest that such disruptions have a catastrophic impact on the behaviour of RC models. Retraining the read-out unit seems futile, as it yields a completely new solution rather than a finer restructuring of the original one. These results cast doubt on the validity of a large class of models that claim to capture the neuronal mechanisms of cognitive and behavioral tasks.

Figure 1. Performance of damaged reservoirs of 1,000 neurons with FORCE and innate learning algorithms. A. Target signal (green, perfectly replicated by the originally trained network) and the trace of the same network after the removal of one neuron from its reservoir. ...
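The kind of fragility described above can be illustrated with a minimal input-driven echo state network, a simplified stand-in for the FORCE-trained autonomous networks tested in the paper: a linear readout is fit by ridge regression on the intact reservoir, then a single reservoir neuron is silenced and the readout error re-measured. All sizes, parameters, and the ridge-regression training here are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, washout = 200, 1000, 100   # illustrative sizes; the paper's reservoirs use 1,000 neurons

# Sparse random recurrent weights, rescaled to spectral radius 0.95 (echo-state regime)
W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1.0, 1.0, N)     # input weights

t = np.arange(steps)
u = np.sin(0.05 * t)                 # drive signal
target = u ** 3                      # nonlinear target for the linear readout

def collect_states(mask):
    """Run the reservoir; `mask` zeroes out 'removed' neurons at every step."""
    X = np.zeros((steps, N))
    x = np.zeros(N)
    for k in range(steps):
        x = np.tanh(W @ x + w_in * u[k]) * mask
        X[k] = x
    return X

# Train the linear readout by ridge regression on the intact reservoir states
X = collect_states(np.ones(N))
A, b = X[washout:], target[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)

def rmse(mask):
    """Readout error when the reservoir is run with some neurons silenced."""
    pred = collect_states(mask)[washout:] @ w_out
    return np.sqrt(np.mean((pred - target[washout:]) ** 2))

err_intact = rmse(np.ones(N))
lesion = np.ones(N)
lesion[0] = 0.0                      # silence a single reservoir neuron
err_lesioned = rmse(lesion)
print(f"intact RMSE   = {err_intact:.4f}")
print(f"lesioned RMSE = {err_lesioned:.4f}")
```

Because the silenced neuron also feeds the rest of the reservoir recurrently, the lesion perturbs the entire state trajectory, not just one readout term, so the error after removing a single neuron is larger than the intact training error even in this toy setting.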
Keywords
Single Neuron, Sinusoidal Signal, Neural Code, Random Performance, Chaotic Regime