(Mal)Adaptive Learning After Switches Between Object-Based and Rule-Based Environments

Computational Brain & Behavior (2022)

Abstract
In reinforcement-learning studies, the environment is typically object-based; that is, objects are predictive of a reward. Recently, studies have also adopted rule-based environments in which stimulus dimensions are predictive of a reward. In the current study, we investigated how people learned (1) in an object-based environment, (2) following a switch to a rule-based environment, (3) following a switch to a different rule-based environment, and (4) following a switch back to an object-based environment. To do so, we administered a reinforcement-learning task comprising four blocks: consecutively, an object-based environment, a rule-based environment, another rule-based environment, and an object-based environment. Computational-modeling results suggest that people (1) initially adopt rule-based learning despite its suboptimal nature in an object-based environment, (2) learn rules after a switch to a rule-based environment, (3) experience interference from previously learned rules following a switch to a different rule-based environment, and (4) learn objects after a final switch to an object-based environment. These results imply that people have a hard time adjusting to switches between object-based and rule-based environments, although they do learn to do so.
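
To make the object-based versus rule-based distinction concrete, the following is a minimal illustrative sketch (not the authors' model): a delta-rule learner that either attaches a value to each whole stimulus (object-based) or sums per-feature values across stimulus dimensions (rule-based). The learning rate, toy stimuli, and reward function are assumptions chosen purely for illustration.

import random

ALPHA = 0.3  # assumed learning rate (illustrative only)

# Toy stimuli: each stimulus is a (color, shape) pair.
STIMULI = [("red", "circle"), ("red", "square"),
           ("blue", "circle"), ("blue", "square")]

def object_value(values, stim):
    # Object-based learner: one value per whole stimulus (conjunction).
    return values.get(stim, 0.0)

def rule_value(values, stim):
    # Rule-based learner: value is the sum of per-feature values,
    # so learning generalizes across stimuli sharing a feature.
    return sum(values.get(f, 0.0) for f in stim)

def update(values, stim, reward, rule_based):
    pred = rule_value(values, stim) if rule_based else object_value(values, stim)
    delta = reward - pred  # prediction error
    if rule_based:
        for f in stim:  # credit each feature (stimulus-dimension value)
            values[f] = values.get(f, 0.0) + ALPHA * delta
    else:
        values[stim] = values.get(stim, 0.0) + ALPHA * delta

# Example: a rule-based environment in which the color "red" predicts reward.
def reward_fn(stim):
    return 1.0 if stim[0] == "red" else 0.0

random.seed(0)
obj_vals, rule_vals = {}, {}
for _ in range(200):
    stim = random.choice(STIMULI)
    r = reward_fn(stim)
    update(obj_vals, stim, r, rule_based=False)
    update(rule_vals, stim, r, rule_based=True)

print("object-based values:", obj_vals)
print("rule-based feature values:", rule_vals)

In this toy setting, the rule-based learner converges on a high value for the "red" feature that transfers to every red stimulus, whereas the object-based learner must learn each red stimulus separately; in an object-based environment the reverse holds, which is why applying a rule-based strategy there is suboptimal.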
Keywords
Adaptivity, Hierarchical Bayesian modeling, Reinforcement learning, Strategy use