Information-theoretic analyses of neural data to minimize the effect of researchers' assumptions in predictive coding studies

PLOS Computational Biology (2023)

Abstract
Studies investigating neural information processing often implicitly ask both which processing strategy out of several alternatives is used and how this strategy is implemented in neural dynamics. A prime example is the study of predictive coding. Such studies often ask whether confirmed predictions about inputs or prediction errors between internal predictions and inputs are passed on in a hierarchical neural system, while at the same time looking for the neural correlates of coding for errors and predictions. If we do not know exactly what a neural system predicts at any given moment, this results in a circular analysis, as has rightly been criticized. To circumvent such circular analysis, we propose to express information processing strategies (such as predictive coding) in terms of local information-theoretic quantities, such that they can be estimated directly from neural data. We demonstrate our approach by investigating two opposing accounts of predictive-coding-like processing strategies, in which we quantify the building blocks of predictive coding, namely the predictability of inputs and the transfer of information, by local active information storage and local transfer entropy, respectively. We define testable hypotheses on the relationship between the two quantities, allowing us to identify which of the assumed strategies was used. We demonstrate our approach on spiking data collected from the retinogeniculate synapse of the cat (N = 16). Applying our local information dynamics framework, we show that the synapse codes for predictable rather than surprising input. To support our findings, we estimate quantities from the partial information decomposition framework, which allow us to differentiate whether the transferred information is primarily bottom-up sensory input or information transferred conditionally on the current state of the synapse. Consistent with our local information-theoretic results, we find that the synapse preferentially transfers bottom-up information.

Many neuroscience studies investigate how neural systems, e.g., neural circuits or cortical areas, process information. Popular theories propose that such systems operate by constantly predicting future sensory inputs from internal models built from previous inputs. Opposing accounts exist on how these predictions and the actual inputs are reconciled to improve future predictions. One popular theory proposes that neural systems use prediction errors to update the internal model, while other theories propose that predominantly correctly predicted information is used. Testing which of these two strategies a neural system actually employs is conceptually difficult, because it requires defining beforehand when a prediction error should occur and by what changes in the data such an occurrence should be indicated. This knowledge is typically not available, making it hard to test the two strategies against each other. Instead, we propose using information-theoretic quantities to make central information processing concepts such as predictions and prediction errors measurable from data, which in turn allows us to formulate testable hypotheses about the information processing carried out by neural systems. We demonstrate our approach on data recorded at retinal synapses of the cat and successfully determine which of the two processing strategies is used.
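The two building blocks named above, predictability of inputs and transfer of information, have standard pointwise definitions in the local information dynamics literature. As a reference for the hypotheses described in the abstract (the embedding lengths $k$ and $l$ are free parameters; the abstract does not state which values the authors use), local active information storage measures how predictable the current sample $x_t$ of a process $X$ is from its own past, and local transfer entropy measures how much the past of a source $Y$ adds to that prediction:

$$
a(x_t) = \log_2 \frac{p\!\left(x_t \mid x_{t-1}^{(k)}\right)}{p(x_t)}, \qquad
te_{Y \to X}(t) = \log_2 \frac{p\!\left(x_t \mid x_{t-1}^{(k)}, y_{t-1}^{(l)}\right)}{p\!\left(x_t \mid x_{t-1}^{(k)}\right)},
$$

where $x_{t-1}^{(k)}$ and $y_{t-1}^{(l)}$ denote the length-$k$ and length-$l$ past states of target and source. Unlike their averaged counterparts, both local quantities can be negative at individual time steps, marking misinformative (surprising) samples; this pointwise sign is what makes them usable as data-driven markers of predictions and prediction errors.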
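To illustrate how such local quantities can be estimated directly from spiking data, here is a minimal Python sketch using plug-in (maximum-likelihood) probability estimates on discrete time series such as binarized spike trains. The function names and the toy data are ours for illustration; the authors' actual estimators for the cat data may well differ (e.g., in binning, embedding, or bias correction):

```python
import numpy as np
from collections import Counter

def local_ais(target, k=1):
    """Plug-in estimate of local active information storage,
    a(x_t) = log2 p(x_t | x_past) / p(x_t), in bits, per time step."""
    x = np.asarray(target)
    pairs = [(x[t], tuple(x[t - k:t])) for t in range(k, len(x))]
    c_joint = Counter(pairs)                # counts of (x_t, x_past)
    c_past = Counter(p for _, p in pairs)   # counts of past states
    c_x = Counter(v for v, _ in pairs)      # marginal counts of x_t
    n = len(pairs)
    return np.array([
        np.log2((c_joint[(v, p)] / c_past[p]) / (c_x[v] / n))
        for v, p in pairs
    ])

def local_te(source, target, k=1, l=1):
    """Plug-in estimate of local transfer entropy,
    te(t) = log2 p(x_t | x_past, y_past) / p(x_t | x_past), in bits."""
    x, y = np.asarray(target), np.asarray(source)
    start = max(k, l)
    trips = [(x[t], tuple(x[t - k:t]), tuple(y[t - l:t]))
             for t in range(start, len(x))]
    c_full = Counter(trips)                             # (x_t, x_past, y_past)
    c_src = Counter((xp, yp) for _, xp, yp in trips)    # (x_past, y_past)
    c_self = Counter((v, xp) for v, xp, _ in trips)     # (x_t, x_past)
    c_past = Counter(xp for _, xp, _ in trips)          # x_past
    return np.array([
        np.log2((c_full[(v, xp, yp)] / c_src[(xp, yp)])
                / (c_self[(v, xp)] / c_past[xp]))
        for v, xp, yp in trips
    ])

# Toy usage: two coupled binary "spike trains".
rng = np.random.default_rng(0)
src = rng.integers(0, 2, 1000)
tgt = np.roll(src, 1)                 # target copies the source with lag 1
tgt[rng.random(1000) < 0.1] ^= 1      # flip 10% of bins as noise
print("mean local AIS:", local_ais(tgt, k=2).mean())
print("mean local TE :", local_te(src, tgt, k=2, l=1).mean())
```

Averaging the returned local values recovers the usual (average) active information storage and transfer entropy; keeping them per time step is what allows hypotheses about individual predictable or surprising events to be tested.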