A rational account of human memory search

bioRxiv (2018)

Abstract
Performing everyday tasks requires the ability to search through and retrieve past memories. A central paradigm for studying human memory search is the semantic fluency task, in which participants are asked to retrieve as many items as possible from a category (e.g., animals). Observed responses tend to be clustered semantically. To understand when the mind decides to switch from one cluster/patch to the next, recent work has proposed two competing mechanisms. Under the first switching mechanism, people make strategic decisions to switch away from a depleted patch based on the marginal value theorem, similar to optimal foraging in a spatial environment. The second switching mechanism demonstrates that similar behavioral patterns can emerge from a random walk on a semantic network, without necessarily involving strategic switches. In the current work, instead of comparing competing switching mechanisms on observed human data, we propose a rational account of the problem by examining what the optimal patch-switching policy would be under the framework of reinforcement learning. The reinforcement learning agent, a Deep Q-Network (DQN), is built upon the random walk model and allows strategic switches based on features of the local semantic patch. After learning from rewards, the resulting policy of the agent gives rise to a third switching mechanism, which outperforms the previous two. Our results provide theoretical justification for strategies used in human memory search, and shed light on how an optimal AI agent under realistic human constraints can generate hypotheses about human strategies in the same task.
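
To make the described architecture concrete, below is a minimal sketch of a DQN-style patch-switching policy: an agent observes a few features of the current semantic patch and chooses between staying (continuing the random walk within the patch) or switching to a new patch. This is an illustrative assumption of how such an agent could be structured, not the authors' implementation; the feature names, network sizes, and `choose_action` helper are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of a DQN-style
# patch-switching policy: map hand-picked features of the local semantic
# patch to Q-values for two actions, STAY (keep sampling within the patch)
# or SWITCH (jump to a new patch). Feature choices and sizes are assumptions.
import random
import torch
import torch.nn as nn

STAY, SWITCH = 0, 1

class PatchSwitchQNet(nn.Module):
    """Maps local patch features to Q-values for STAY vs. SWITCH."""
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # one Q-value per action
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

def choose_action(qnet: PatchSwitchQNet,
                  features: torch.Tensor,
                  epsilon: float = 0.1) -> int:
    """Epsilon-greedy selection over the two switching actions."""
    if random.random() < epsilon:
        return random.choice([STAY, SWITCH])
    with torch.no_grad():
        q_values = qnet(features)
    return int(q_values.argmax().item())

if __name__ == "__main__":
    # Hypothetical patch features: items already retrieved from the current
    # patch, similarity of the last retrieval to the patch, and steps since
    # the last novel retrieval.
    qnet = PatchSwitchQNet()
    features = torch.tensor([4.0, 0.2, 3.0])
    action = choose_action(qnet, features)
    print("action:", "SWITCH" if action == SWITCH else "STAY")
```

In a full training setup, such a network would be updated from retrieval rewards (e.g., one unit of reward per novel item recalled), so that the learned Q-values implicitly encode when a patch is depleted enough to justify a switch.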