Inductive biases of neural networks for generalization in spatial navigation

bioRxiv (2022)

Abstract
Artificial reinforcement learning agents that perform well in training tasks typically perform worse than animals in novel tasks. We propose one reason: generalization requires modular architectures like the brain. We trained deep reinforcement learning agents using neural architectures with various degrees of modularity in a partially observable navigation task. We found that highly modular architectures that largely separate computations of internal belief of state from action and value allow better generalization performance than agents with less modular architectures. Furthermore, the modular agent's internal belief is formed by combining prediction and observation, weighted by their relative uncertainty, suggesting that networks learn a Kalman filter-like belief update rule. Therefore, smaller uncertainties in observation than in prediction lead to better generalization to tasks with novel observable dynamics. These results exemplify the rationale of the brain's inductive biases and show how insights from neuroscience can inspire the development of artificial systems with better generalization.

Competing Interest Statement: The authors have declared no competing interest.
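To make the abstract's "Kalman filter-like belief update rule" concrete, the sketch below shows the standard one-dimensional Kalman update, in which a predicted belief and a noisy observation are combined with weights set by their relative uncertainties. This is an illustrative sketch only, not code from the paper; the function and variable names (`kalman_update`, `belief_mean`, `obs_var`, and the example values) are hypothetical.

```python
import numpy as np  # assumed available; used here only for clarity of style


def kalman_update(belief_mean, belief_var, observation, obs_var):
    """Combine a predicted belief with a noisy observation.

    Each source is weighted by its relative uncertainty: the Kalman gain
    approaches 1 when the observation is much more reliable than the
    prediction, and 0 in the opposite case.
    """
    gain = belief_var / (belief_var + obs_var)  # Kalman gain in [0, 1]
    new_mean = belief_mean + gain * (observation - belief_mean)
    new_var = (1.0 - gain) * belief_var  # uncertainty shrinks after the update
    return new_mean, new_var


# Hypothetical example: an uncertain prediction (variance 4.0) combined with
# a precise observation (variance 0.5). The low observation noise pulls the
# posterior strongly toward the observation, mirroring the abstract's point
# that smaller observation uncertainty supports generalization to tasks with
# novel observable dynamics.
mean, var = kalman_update(belief_mean=0.0, belief_var=4.0,
                          observation=1.0, obs_var=0.5)
print(mean, var)  # mean ≈ 0.89, var ≈ 0.44
```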