An image-computable model of speeded decision-making
arXiv (2024)

Abstract
Evidence accumulation models (EAMs) are the dominant framework for modeling
response time (RT) data from speeded decision-making tasks. While providing a
good quantitative description of RT data in terms of abstract perceptual
representations, EAMs do not explain how the visual system extracts these
representations in the first place. To address this limitation, we introduce
the visual accumulator model (VAM), in which convolutional neural network
models of visual processing and traditional EAMs are jointly fitted to
trial-level RTs and raw (pixel-space) visual stimuli from individual subjects.
Models fitted to large-scale cognitive training data from a stylized flanker
task captured individual differences in congruency effects, RTs, and accuracy.
We find evidence that the selection of task-relevant information occurs through
the orthogonalization of relevant and irrelevant representations, demonstrating
how our framework can be used to relate visual representations to behavioral
outputs. Together, our work provides a probabilistic framework for both
constraining neural network models of vision with behavioral data and studying
how the visual system extracts representations that guide decisions.
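The core idea described above — a convolutional front end that maps raw pixels to accumulator drift rates, which then drive a race-to-threshold decision process — can be sketched as follows. This is a minimal NumPy illustration of the general architecture, not the authors' implementation: the network shape, parameter values, and function names are all illustrative stand-ins for the jointly fitted components of the VAM.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    # Naive valid-mode 2D convolution over a stack of kernels.
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def drift_rates(img, kernels, readout):
    # CNN front end: conv -> ReLU -> global average pool -> linear readout.
    # The readout maps visual features to one drift rate per response option.
    feats = np.maximum(conv2d(img, kernels), 0).mean(axis=(1, 2))
    v = readout @ feats
    return np.maximum(v, 0.2)  # floor keeps drifts positive so the race ends

def simulate_race(v, threshold=1.0, dt=1e-3, noise=0.3, t0=0.2,
                  max_steps=100_000):
    # Independent-accumulator race: each option integrates noisy evidence
    # at its drift rate; the first accumulator to reach threshold wins.
    x = np.zeros_like(v)
    for step in range(1, max_steps + 1):
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal(v.shape)
        if x.max() >= threshold:
            break
    return int(x.argmax()), t0 + step * dt  # (choice, RT with non-decision time)

# Toy 16x16 stimulus and randomly initialised parameters; in the VAM these
# would be fitted jointly to a subject's stimuli and trial-level RTs.
img = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
readout = rng.standard_normal((2, 4))
choice, rt = simulate_race(drift_rates(img, kernels, readout))
```

Simulating many such trials per stimulus yields a predicted joint distribution over choices and RTs, which is what allows the pixel-to-behavior pipeline to be fit to data probabilistically.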