Quantifying Population-level Neural Tuning Functions Using Ricker Wavelets and the Bayesian Bootstrap.

Laura Ahumada, Christian Panitz, Caitlin Traiser, Faith Gilbert, Mingzhou Ding, Andreas Keil

bioRxiv: the preprint server for biology (2024)

Abstract
Experience changes the tuning of sensory neurons, including neurons in retinotopic visual cortex, as evident from work in humans and non-human animals. In human observers, visuo-cortical re-tuning has been studied during aversive generalization learning paradigms, in which the similarity of generalization stimuli (GSs) with a conditioned threat cue (CS+) is used to quantify tuning functions. This work has utilized pre-defined tuning shapes reflecting prototypical generalization (Gaussian) and sharpening (Difference-of-Gaussians) patterns. This approach may constrain the ways in which re-tuning can be characterized, for example if tuning patterns do not match the prototypical functions or represent a mixture of functions. The present study proposes a flexible and data-driven method for precisely quantifying changes in neural tuning, based on the Ricker wavelet function and the Bayesian bootstrap. The method is illustrated using data from a study in which university students (n = 31) performed an aversive generalization learning task. Oriented gray-scale gratings served as CS+ and GSs, and white noise served as the unconditioned stimulus (US). Acquisition and extinction of the aversive contingencies were examined while steady-state visual evoked potentials (ssVEP) and alpha-band (8-13 Hz) power were measured from scalp EEG. Results showed that the Ricker wavelet model fitted the ssVEP and alpha-band data well. The pattern of re-tuning in ssVEP amplitude across the stimulus gradient resembled a generalization (Gaussian) shape in the acquisition phase and a sharpening (Difference-of-Gaussians) shape in the extinction phase. As expected, the pattern of re-tuning in alpha power took the form of a generalization shape in both phases. The Ricker-based approach led to greater Bayes factors and more interpretable results compared to prototypical tuning models.
The results highlight the promise of the current method for capturing the precise nature of visuo-cortical tuning functions, unconstrained by the exact implementation of prototypical a-priori models.

Highlights:
- Tuning functions are a common way of describing sensory responses, primarily in the visual cortex.
- The quantification and interpretation of tuning functions have faced computational and conceptual problems.
- We demonstrate how the Ricker function can be used as a simple and interpretable way of measuring tuning functions.
- We applied the Ricker function together with a Bayesian bootstrap approach across a gradient of stimulus features in a generalization conditioning task to characterize visual tuning in human EEG data.
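To make the two building blocks of the method concrete, the sketch below implements the standard Ricker ("Mexican hat") wavelet and Rubin's (1981) Bayesian bootstrap in Python. This is an illustration of the general techniques named in the abstract, not the authors' analysis pipeline; the function names, the width parameterization, and the choice of the mean as the bootstrapped statistic are assumptions for the example.

```python
import numpy as np

def ricker(x, a):
    """Standard Ricker (Mexican hat) wavelet with width parameter a.

    Proportional to the negative second derivative of a Gaussian:
    a central positive lobe flanked by negative side lobes, so a single
    shape family can express both Gaussian-like generalization (broad
    center, weak lobes) and sharpening (pronounced negative surround).
    """
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (x / a) ** 2) * np.exp(-x ** 2 / (2.0 * a ** 2))

def bayesian_bootstrap(data, statistic=np.average, n_draws=2000, seed=None):
    """Rubin's Bayesian bootstrap for a weighted statistic.

    Instead of resampling observations, each posterior draw assigns the
    n observations random weights from a flat Dirichlet(1, ..., 1)
    distribution and evaluates the statistic under those weights.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    weights = rng.dirichlet(np.ones(len(data)), size=n_draws)
    return np.array([statistic(data, weights=w) for w in weights])

# A hypothetical stimulus gradient: the tuning peak sits at the CS+ (x = 0),
# with response falling off (and dipping below baseline) toward distant GSs.
x = np.linspace(-4.0, 4.0, 9)
curve = ricker(x, a=1.5)

# Posterior draws of a mean response under the Bayesian bootstrap.
draws = bayesian_bootstrap([0.8, 1.1, 0.9, 1.3, 1.0], n_draws=500, seed=0)
```

The key practical difference from the classical bootstrap is that Dirichlet weights vary smoothly over the simplex, so no observation is ever dropped entirely from a draw; the resulting distribution over fitted Ricker parameters can then be summarized with credible intervals or Bayes factors.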