Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic
CoRR (2023)
Abstract
Learning high-quality Q-value functions plays a key role in the success of
many modern off-policy deep reinforcement learning (RL) algorithms. Previous
works focus on addressing the value overestimation issue, an outcome of
adopting function approximators and off-policy learning. Deviating from the
common viewpoint, we observe that Q-values are in fact underestimated in the
later stage of RL training, primarily because Bellman updates bootstrap from
actions sampled by the current policy, which are often inferior to the best
action samples already stored in the replay buffer. We hypothesize that this
long-neglected phenomenon potentially hinders policy learning and reduces
sample efficiency. Our insight to address this issue is to incorporate
sufficient exploitation of past successes while maintaining exploration
optimism. We propose the Blended Exploitation and Exploration (BEE) operator, a
simple yet effective approach that updates the Q-value using both historically
best-performing actions and the current policy. The instantiations of our
method in both model-free and model-based settings outperform state-of-the-art
methods in various continuous control tasks and achieve strong performance in
failure-prone scenarios and real-world robot tasks.
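
To make the blended update concrete, here is a minimal sketch of a BEE-style target computation under stated assumptions: it uses PyTorch, and the callables `q_target`, `policy`, and `v_buffer` are placeholders rather than names from the paper. In particular, `v_buffer` stands in for whatever estimate of the value of the best buffer actions the exploitation term uses (e.g., one learned by expectile-style regression over replay-buffer actions), and `lam` is the blending weight between exploitation and exploration.

```python
import torch

@torch.no_grad()
def blended_bellman_target(reward, next_obs, done,
                           q_target, policy, v_buffer,
                           gamma=0.99, lam=0.5):
    """Illustrative blended TD target: lam * exploitation + (1 - lam) * exploration.

    q_target: target Q-network, q_target(obs, act) -> value tensor (placeholder)
    policy:   current policy, policy.sample(obs) -> actions (placeholder)
    v_buffer: value estimate of the best actions observed in the replay
              buffer, v_buffer(obs) -> value tensor (placeholder)
    """
    # Exploration term: standard bootstrap from actions of the current policy.
    next_act = policy.sample(next_obs)
    explore_v = q_target(next_obs, next_act)

    # Exploitation term: value of historically best-performing buffer actions.
    exploit_v = v_buffer(next_obs)

    # Blend the two value estimates and form the one-step TD target.
    next_v = lam * exploit_v + (1.0 - lam) * explore_v
    return reward + gamma * (1.0 - done) * next_v
```

This target would then replace the usual bootstrap target in the critic's regression loss; setting `lam = 0` recovers a standard actor-critic update, while larger `lam` leans harder on past successes.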
Keywords
serendipity, past success, off-policy, actor-critic