Lack of perseveration in a reinforcement learning model induces estimation biases in the learning rate and inverse temperature

crossref(2022)

Abstract
Experience influences decision-making mostly through outcome history, but also through one's own choice history. Reinforcement learning models can implement both processes. The influence of choice history, or perseveration, is a tendency to repeat one's own decisions and is typically implemented as an explicit action-autocorrelation term. However, this component is not always included in models. In the present study, we explored the estimation biases that omitting a perseveration term induces in other parameters, particularly the learning rate and inverse temperature, which are critical parameters in many studies, including research in computational psychiatry. First, we examined potential estimation biases in these parameters using a probabilistic learning task. This task enabled us to estimate learning rates for win and loss outcomes separately, as these outcomes have different volatility. When the model lacked a perseveration term, we observed larger estimation bias in the two learning rates and smaller estimation bias in the inverse temperature. The learning-rate bias differed slightly depending on the volatility of outcomes. Second, we conducted a series of simulations and found that estimation bias increased with the magnitude of intrinsic action perseveration. Furthermore, failure to incorporate perseveration directly affected the estimation bias in the learning rate and indirectly affected that in the inverse temperature. In sum, this article clarifies the estimation biases in model parameters caused by failure to include perseveration, along with their mechanisms; we also emphasize the possible misinterpretations of results due to these biases. In addition to these model-misspecification issues, we discuss model parameter reliability, which can limit the interpretation of study results.
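For readers unfamiliar with this model class, the sketch below illustrates the general structure the abstract describes: a Q-learning agent for a two-armed bandit with separate learning rates for win and loss outcomes, a softmax choice rule governed by an inverse temperature, and an additive perseveration bonus for repeating the previous action. This is a minimal sketch of the standard formulation, not the authors' exact model; the function name simulate_agent and the parameter names (alpha_win, alpha_loss, beta, phi) are illustrative assumptions.

```python
import numpy as np

def simulate_agent(rewards, alpha_win=0.3, alpha_loss=0.1,
                   beta=3.0, phi=1.0, rng=None):
    """Simulate a two-armed bandit agent: Q-learning with separate
    learning rates for win and loss outcomes, a softmax choice rule
    with inverse temperature beta, and a perseveration bonus phi for
    repeating the previous action. (Illustrative sketch, not the
    authors' implementation.)

    rewards: (n_trials, 2) array of binary outcomes for each option.
    phi: perseveration weight; phi = 0 recovers a model without
         the action-autocorrelation term.
    """
    rng = rng or np.random.default_rng()
    n_trials = rewards.shape[0]
    q = np.zeros(2)           # action values
    last_choice = -1          # no previous choice on trial 0
    choices = np.empty(n_trials, dtype=int)

    for t in range(n_trials):
        # Perseveration: additive bonus for repeating the last choice.
        persev = np.array([phi if a == last_choice else 0.0
                           for a in range(2)])
        logits = beta * q + persev
        p = np.exp(logits - logits.max())   # numerically stable softmax
        p /= p.sum()
        c = rng.choice(2, p=p)

        # Asymmetric update: different learning rates for win vs. loss.
        r = rewards[t, c]
        alpha = alpha_win if r > 0 else alpha_loss
        q[c] += alpha * (r - q[c])

        choices[t] = c
        last_choice = c

    return choices
```

Simulating choices with phi > 0 and then fitting the phi = 0 variant reproduces the misspecification scenario the study analyzes, in which the omitted perseveration term biases the recovered learning rates and inverse temperature.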