SNSE: State Novelty Sampling Exploration

2022 IEEE 8th International Conference on Cloud Computing and Intelligent Systems (CCIS)

Abstract
Exploration in sparse-reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly, the intrinsic and extrinsic rewards are simply summed. However, intrinsic rewards are non-stationary, which contaminates the extrinsic environmental reward and changes the optimization objective of the policy to maximizing the sum of intrinsic and extrinsic rewards. This can lead the agent to a mixture policy that neither explores resolutely nor pursues the task score. This paper adopts a simple and generic perspective in which the extrinsic reward and the intrinsic reward are explicitly disentangled. Through a multiple-sampling mechanism, our method, State Novelty Sampling Exploration (SNSE), decouples the intrinsic and extrinsic rewards so that each can play its respective role: intrinsic rewards directly guide the agent toward novel samples during the exploration phase, while the policy optimization objective remains the maximization of extrinsic rewards. In sparse-reward environments, our experiments show that SNSE improves the efficiency of exploring unknown states and the final performance of the policy. Under dense rewards, SNSE does not introduce optimization bias into the policy or cause performance loss.
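The abstract only sketches the mechanism, so the snippet below is a minimal, hypothetical illustration of the decoupling idea: a count-based novelty bonus steers which of several candidate actions sampled from the policy is executed, while the value update is computed from the extrinsic reward alone. All names here (novelty_bonus, select_action, the toy chain_step environment, the choice of a count-based bonus and tabular Q-learning) are assumptions for illustration, not the authors' SNSE implementation.

```python
# Illustrative sketch only: novelty-guided action sampling with intrinsic and
# extrinsic rewards kept decoupled. Names and structure are assumptions, not
# the SNSE paper's code.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

N_ACTIONS = 4
visit_counts = defaultdict(int)                      # state visitation counts
q_values = defaultdict(lambda: np.zeros(N_ACTIONS))  # extrinsic action values


def novelty_bonus(state):
    """Count-based intrinsic signal: rarely visited states score higher."""
    return 1.0 / np.sqrt(1 + visit_counts[state])


def select_action(state, next_state_fn, k=8, temperature=1.0):
    """Sample k candidate actions from a softmax over extrinsic Q-values and
    execute the one whose predicted next state is most novel. The intrinsic
    bonus only steers this sampling step; it never enters the learning target."""
    prefs = q_values[state] / temperature
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    candidates = rng.choice(N_ACTIONS, size=k, p=probs)
    scored = [(novelty_bonus(next_state_fn(state, a)), a) for a in candidates]
    return max(scored)[1]


def update(state, action, extrinsic_reward, next_state, alpha=0.1, gamma=0.99):
    """Tabular Q-learning target built from the extrinsic reward alone, so the
    optimization objective is not contaminated by the non-stationary bonus."""
    target = extrinsic_reward + gamma * q_values[next_state].max()
    q_values[state][action] += alpha * (target - q_values[state][action])
    visit_counts[next_state] += 1


# Toy usage: a 10-state chain where action 1 moves right and others stay put.
chain_step = lambda s, a: min(s + 1, 9) if a == 1 else s
action = select_action(0, chain_step)
update(0, action, extrinsic_reward=0.0, next_state=chain_step(0, action))
```

Because the bonus never enters the learning target in this sketch, its non-stationarity cannot bias the policy's objective, which is the property the abstract emphasizes.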
Keywords
Agents exploration, Deep reinforcement learning, Intrinsic rewards