A decoder-based spike train metric for analyzing the neural code in the retina

Frontiers in Systems Neuroscience (2009)

Abstract
Spike trains of sensory neuronal networks exhibit variability in response to repeated presentations of the same stimulus. In the retina, especially, this variability is likely not due to computationally relevant changes in the internal state of the retinal circuitry, and can be treated as noise. Thus the exact timing and relative configuration of spikes is not entirely relevant to the stimulus. Here, based on a Bayesian decoder, we devise a spike train metric which measures the distinguishability of two given sets of spike trains and allows for quantifying the importance of different spikes and spike patterns in encoding various stimulus features. The decoder is based on a generalized linear model (GLM) (1) which accurately predicts how a group of neurons transforms stimuli (spatiotemporal contrast fields) into spikes, and accounts for history dependencies and interactions between cells. The model has been fit to multi-electrode extracellular recordings from macaque retina (2,3). Given the observed spike trains of a population of cells, the decoded stimulus may be obtained by maximizing the posterior probability defined by the GLM. We define the distance between two spike trains to be the Euclidean distance between their associated decoded stimuli. This metric is, by construction, entirely determined by the properties of the spike generation model and the stimulus ensemble, and contains no free parameters of its own. By exploiting the likelihood concavity and temporal quasi-locality of the GLM, together with properties of banded matrices, we have devised a novel method for finding the unique posterior maximum in a computational time that scales only linearly with the stimulus duration (4). This is crucial, as the calculation of the above metric would otherwise be prohibitive. The Bayesian decoder is nonlinear, which means the stimulus information encoded by a spike is not fixed and depends on the local context.
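The construction above can be illustrated with a minimal sketch (not the authors' code): each spike train is decoded to its MAP stimulus under a toy Poisson GLM with an identity stimulus filter and a standard Gaussian stimulus prior, and the metric is the Euclidean distance between the two decoded stimuli. The model, decoder settings, and all names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50           # number of time bins (toy choice)
K = np.eye(T)    # stimulus filter: each bin drives its own rate (toy choice)

def map_decode(spikes, n_iter=200, lr=0.1):
    """Gradient ascent on the concave log-posterior
    log p(x | y) = sum_t [ y_t (Kx)_t - exp((Kx)_t) ] - 0.5 ||x||^2 + const."""
    x = np.zeros(T)
    for _ in range(n_iter):
        u = K @ x
        # Poisson-likelihood gradient plus Gaussian-prior gradient
        x += lr * (K.T @ (spikes - np.exp(u)) - x)
    return x

def decoder_metric(spikes_a, spikes_b):
    """Distance between two spike trains = Euclidean distance
    between their MAP-decoded stimuli."""
    return np.linalg.norm(map_decode(spikes_a) - map_decode(spikes_b))

# Two spike trains differing only by one extra spike in bin 25:
y1 = rng.poisson(0.5, size=T).astype(float)
y2 = y1.copy()
y2[25] += 1.0
print(decoder_metric(y1, y2))   # addition cost of that spike under this decoder
```

The paper's actual decoder operates on a fitted multi-cell GLM and uses a banded Newton solve for linear-time MAP decoding; simple gradient ascent stands in for that step here.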
By looking at the distance between pairs of spike trains which differ solely in the absence/presence or timing of a single spike, we define the addition/removal cost and timing sensitivity of different spikes, and study their statistics in the recorded spike train data (4). The nonlinearity of the decoder results in large variations in these quantities across spikes, in sharp contrast with a linear decoder, which would instead yield constant values. We investigated different factors (e.g. the local firing rate, synchrony with spikes in neighboring cells) that influence the importance of spikes and their timing sensitivity. In addition, we found that the relative cost of spike shifts vs. removals and additions exhibits less variability; on average, jittering a spike time by 10 ± 2 ms was equivalent to removing it. Finally, we show that small lossy compressions of spike trains, which coarse-grain their different (collective or single-spike) degrees of freedom optimally according to relevance, are dictated by a local version of our spike train metric. As an example, we observed that for nearly synchronous spikes in neighboring cells, the optimal compression retains the relative timing information at higher resolution than the center-of-mass timing. Interestingly, a linear decoder would yield a compression that blurs out both degrees of freedom equally.
Keywords
degree of freedom, Euclidean distance, lossy compression, posterior probability, center of mass, neural code, generalized linear model