Computing variations of entropy and redundancy under nonlinear mappings not preserving the signal dimension: quantifying the efficiency of V1 cortex

Proceedings of Entropy 2021: The Scientific Tool of the 21st Century (2021)

Abstract
In computational neuroscience, the Efficient Coding Hypothesis argues that neural organization results from the optimization of information-theoretic goals [Barlow Proc.Nat.Phys.Lab.59]. One way to confirm this is to analyze the statistical performance of biological systems that have not been statistically optimized [Renart et al. Science10, Malo&Laparra Neur.Comp.10, Foster JOSA18, Gomez-Villa&Malo J.Neurophysiol.19]. However, when analyzing the information-theoretic performance, cortical magnification in the retina-cortex pathway poses a theoretical problem. Cortical magnification refers to the increase of the signal dimensionality [Cowey&Rolls Exp.Brain Res.74]: conventional models based on redundant wavelets increase the dimension of the signal by one order of magnitude [Watson CVGIP87, Schwartz&Simoncelli Nat.Neurosci.01]. Such an increase makes it problematic to quantify the efficiency of the transforms. In fact, previous accounts of the information flow along physiological networks had to resort to some sort of approximation to deal with magnification, e.g. (1) using orthonormal wavelets or otherwise preserving the dimension [Bethge JOSA06, Malo&Laparra Neur.Comp.10], or (2) using a reference for the relations introduced by the redundant transform [Laparra&Malo JMLR10, Gomez-Villa&Malo J.Neurophysiol.19]. In this work, we address the information-theoretic analysis of such nonlinear, non-dimension-preserving systems without any approximation. On the one hand, we derive the theory to compute variations of entropy and total correlation under such transforms, which requires the Jacobian of the system with respect to the input; to that end, we use the analytical results in [Martinez&Malo PLOS18]. On the other hand, we compare these predictions with a recently proposed non-parametric estimator of information-theoretic measures: Rotation-Based Iterative Gaussianization [Laparra&Malo IEEE Trans.Neur.Nets11, Johnson, Laparra&Malo ICML19]. The consistency between the results validates the theory and provides new insights into visual neural function.
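For readers outside the field, the quantities mentioned in the abstract can be summarized with standard information-theoretic identities. The LaTeX sketch below states the total correlation and the entropy change under a smooth map; the non-square-Jacobian form for injective, dimension-expanding maps is the usual differential-geometric generalization and is only assumed here, not necessarily the exact expression derived in the paper.

% Total correlation (redundancy) of a d-dimensional signal x:
T(\mathbf{x}) = \sum_{i=1}^{d} h(x_i) - h(\mathbf{x})

% Entropy change under an invertible, dimension-preserving map y = S(x):
h(\mathbf{y}) = h(\mathbf{x}) + \mathbb{E}_{\mathbf{x}}\!\left[ \log \left| \det \nabla S(\mathbf{x}) \right| \right]

% Assumed generalization for an injective S : \mathbb{R}^d \to \mathbb{R}^D with D > d,
% where the non-square Jacobian enters through its Gram matrix:
h(\mathbf{y}) = h(\mathbf{x}) + \tfrac{1}{2}\, \mathbb{E}_{\mathbf{x}}\!\left[ \log \det \left( \nabla S(\mathbf{x})^{\top} \nabla S(\mathbf{x}) \right) \right]

In this notation, the comparison announced in the abstract amounts to checking that the Jacobian-based predictions of the variations of h and T agree with the non-parametric RBIG estimates.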