Potential-Based Reward Shaping For Intrinsic Motivation
CoRR (2024)
Abstract
Recently there has been a proliferation of intrinsic motivation (IM)
reward-shaping methods to learn in complex and sparse-reward environments.
These methods can often inadvertently change the set of optimal policies in an
environment, leading to suboptimal behavior. Previous work on mitigating the
risks of reward shaping, particularly through potential-based reward shaping
(PBRS), has not been applicable to many IM methods, as they are often complex,
trainable functions themselves, and therefore dependent on a wider set of
variables than the traditional reward functions that PBRS was developed for. We
present an extension to PBRS that we prove preserves the set of optimal
policies under a more general set of functions than has been previously proven.
We also present Potential-Based Intrinsic Motivation (PBIM), a method for
converting IM rewards into a potential-based form that is usable without
altering the set of optimal policies. Testing in the MiniGrid DoorKey and Cliff
Walking environments, we demonstrate that PBIM successfully prevents the agent
from converging to a suboptimal policy and can speed up training.
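The classical PBRS result that the abstract builds on adds a shaping term of the form F(s, s') = γΦ(s') − Φ(s) to the environment reward, which provably leaves the set of optimal policies unchanged. A minimal sketch follows; the distance-based potential function, goal location, and γ value are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of classic potential-based reward shaping (PBRS).
# The potential phi and goal at (3, 3) are hypothetical choices for
# illustration; they are not the paper's PBIM construction.

GAMMA = 0.99

def phi(state):
    """Hypothetical potential: negative Manhattan distance to a goal at (3, 3)."""
    x, y = state
    return -(abs(3 - x) + abs(3 - y))

def shaped_reward(reward, state, next_state, gamma=GAMMA):
    """Add the PBRS term F(s, s') = gamma * phi(s') - phi(s) to the
    environment reward; this form preserves optimal policies."""
    return reward + gamma * phi(next_state) - phi(state)

# Along a trajectory, the shaping contributions depend only on the
# potentials visited, not on the environment reward, so they cannot
# introduce new optima.
traj = [(0, 0), (1, 0), (1, 1), (2, 1), (3, 1), (3, 2), (3, 3)]
total_shaping = sum(
    GAMMA * phi(s2) - phi(s1) for s1, s2 in zip(traj, traj[1:])
)
```

The IM methods the paper targets differ from this sketch in that their "reward" is itself a trained function of more variables than (s, s'), which is why the standard PBRS guarantee does not directly apply.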