Bayesian auditory scene synthesis explains human perception of illusions and everyday sounds

bioRxiv (2023)

Abstract
Perception has long been envisioned to use an internal model of the world to infer the causes of sensory signals. However, tests of inferential accounts of perception have been limited by computational intractability, as inference requires searching through complex hypothesis spaces. Here we revisit the idea of perception as inference in a world model, using auditory scene analysis as a case study. We applied contemporary computational tools to enable Bayesian inference in a structured generative model of auditory scenes. Model inferences accounted for many classic illusions. Unlike most previous accounts of auditory illusions, our model can be evaluated on any sound, and it exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus-computability and interpretable structure enables 'rich falsification', revealing additional assumptions about sound generation needed to explain perception. The results show how a single generative theory can account for the perception of both classic illusions and everyday sensory signals.

Competing Interest Statement: The authors have declared no competing interest.
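To make the "perception as inference in a generative model" idea concrete, the following is a minimal sketch, not the paper's model: it assumes a toy generative model in which auditory scenes are sums of fixed-amplitude sinusoids in Gaussian noise, and it compares two scene hypotheses (one source vs. two concurrent sources) for an observed sound by marginalizing over source frequencies on a coarse grid. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not the paper's structured model): Bayesian comparison
# of two scene hypotheses for a short sound -- "one tone" vs. "two concurrent tones".
# Each hypothesized source is a unit-amplitude sinusoid; observations are assumed to
# have additive Gaussian noise. Inference marginalizes over source frequency grids.

rng = np.random.default_rng(0)
sr, dur = 8000, 0.1                          # sample rate (Hz), duration (s)
t = np.arange(int(sr * dur)) / sr
freq_grid = np.arange(200.0, 1000.0, 25.0)   # candidate source frequencies (Hz)
sigma = 0.3                                  # assumed observation-noise std

def log_likelihood(sound, freqs):
    """Log p(sound | scene containing sinusoidal sources at `freqs`)."""
    pred = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    resid = sound - pred
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - sound.size * np.log(sigma * np.sqrt(2 * np.pi)))

def log_marginal_one_source(sound):
    # Uniform prior over the frequency grid; marginalize by log-sum-exp.
    lls = np.array([log_likelihood(sound, [f]) for f in freq_grid])
    return np.logaddexp.reduce(lls) - np.log(len(freq_grid))

def log_marginal_two_sources(sound):
    # Uniform prior over pairs of frequencies.
    lls = np.array([log_likelihood(sound, [f1, f2])
                    for f1 in freq_grid for f2 in freq_grid])
    return np.logaddexp.reduce(lls) - 2 * np.log(len(freq_grid))

# Observed "scene": a mixture of 400 Hz and 650 Hz tones in noise.
sound = (np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 650 * t)
         + sigma * rng.standard_normal(t.size))

log_post_odds = log_marginal_two_sources(sound) - log_marginal_one_source(sound)
print(f"log posterior odds (two sources vs. one): {log_post_odds:.1f}")
```

In this toy setting the posterior odds favor the two-source interpretation of the mixture, illustrating how perceptual organization can fall out of Bayesian model comparison; the paper's actual model uses a far richer, structured hypothesis space over sources and sound-generating processes, and is evaluated against illusions and real-world sound mixtures.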