Understanding and mitigating the limitations of prioritized experience replay.

International Conference on Uncertainty in Artificial Intelligence (2022)

Abstract
Prioritized Experience Replay (ER) has been empirically shown to improve sample efficiency across many domains and has attracted considerable attention; however, there is little theoretical understanding of why such prioritized sampling helps, or of its limitations. In this work, we take a close look at prioritized ER. In a supervised learning setting, we show an equivalence between error-based prioritized sampling for minimizing the mean squared error and uniform sampling for the cubic power loss. We then provide theoretical insight into why error-based prioritized sampling improves the convergence rate over uniform sampling when minimizing the mean squared error during early learning. Based on this insight, we point out two limitations of the prioritized ER method: 1) outdated priorities and 2) insufficient coverage of the sample space. To mitigate these limitations, we propose a model-based stochastic gradient Langevin dynamics sampling method. We show that our method provides states distributed close to an ideal prioritized sampling distribution, estimated by a brute-force method, which does not suffer from the two limitations. We conduct experiments on both discrete and continuous control problems to show our approach's efficacy, and examine the practical implications of our method in an autonomous driving application.
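The claimed equivalence can be checked numerically in a toy regression setting. The sketch below (illustrative only; the linear model, data, and error definition are assumptions, not from the paper) compares the expected MSE gradient under error-proportional prioritized sampling with the uniform-sampling gradient of a cubic power loss: they coincide up to the normalizing constant, the mean absolute error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical linear model f(x) = w * x with per-sample errors delta_i.
x = rng.normal(size=n)
y = rng.normal(size=n)
w = 0.5
delta = w * x - y  # per-sample prediction error

# Expected MSE gradient under error-based prioritized sampling, p_i ∝ |delta_i|:
#   E_p[ delta_i * d f(x_i)/dw ] = sum_i (|delta_i| / sum_j |delta_j|) * delta_i * x_i
p = np.abs(delta) / np.abs(delta).sum()
grad_prioritized_mse = np.sum(p * delta * x)

# Gradient of the cubic power loss (1/3)|delta|^3 under uniform sampling:
#   E_uniform[ |delta_i| * delta_i * d f(x_i)/dw ]
grad_uniform_cubic = np.mean(np.abs(delta) * delta * x)

# The two gradients are proportional; the constant is mean(|delta|).
ratio = grad_uniform_cubic / grad_prioritized_mse
print(np.isclose(ratio, np.abs(delta).mean()))  # → True
```

Since the two sampling schemes yield gradients that differ only by a positive scalar, they trace out the same optimization direction, which is the sense of "equivalence" the abstract refers to.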
Keywords
prioritized experience