Modeling Survival in Model-Based Reinforcement Learning

2020 Second International Conference on Transdisciplinary AI (TransAI)

Abstract
Although recent model-free reinforcement learning algorithms have proven capable of mastering complicated decision-making tasks, their sample complexity remains a hurdle to deploying them in many real-world applications. Model-based reinforcement learning offers a remedy, yet model-based methods are inherently more computationally expensive and susceptible to sub-optimality. One reason is that model-generated data are always less accurate than real data, which often results in inaccurate transition and reward models. To mitigate this problem, this work introduces the notion of survival, discussing cases in which the agent's goal is simply to survive and how this relates to maximizing the expected reward. To that end, a substitute for the reward-function approximator is introduced that learns to avoid terminal states rather than to maximize accumulated reward from safe states. Because terminal states form only a small fraction of the state space, focusing on them drastically reduces the training effort. A model-based reinforcement learning method, Survive, is then proposed that trains the agent to avoid dangerous states through a safety-map model built on temporal credit assignment in the vicinity of terminal states. Finally, the performance of the proposed algorithm is evaluated and compared with existing methods.
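The sketch below illustrates the core idea the abstract describes, not the paper's actual Survive algorithm: a tabular danger map D(s) estimating how close a state is to a terminal (failure) state, trained by propagating credit backwards from terminal states, and used together with a learned transition model to pick the action whose predicted successor is least dangerous. The corridor environment, horizon, decay schedule, and update rule are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): a danger map updated by temporal
# credit assignment near terminal states, plus a learned deterministic model
# used to steer the agent away from danger.
import numpy as np

N_STATES, N_ACTIONS = 20, 2
HORIZON = 5          # how far back from a terminal state credit is assigned
DECAY = 0.7          # credit fades with distance from the terminal state
LR = 0.5             # learning rate for danger-map updates

danger = np.zeros(N_STATES)                     # D(s): estimated danger of state s
model = np.zeros((N_STATES, N_ACTIONS), int)    # learned model: (s, a) -> s'

def update_danger_map(trajectory, terminated):
    """Assign fading credit to the states visited just before termination."""
    if not terminated:
        return
    for k, s in enumerate(reversed(trajectory[-HORIZON:])):
        target = DECAY ** k                      # 1.0 at the terminal state itself
        danger[s] += LR * (target - danger[s])

def choose_action(state, epsilon=0.1):
    """Pick the action whose model-predicted successor is least dangerous."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    predicted_next = model[state]                # one predicted successor per action
    return int(np.argmin(danger[predicted_next]))

# Toy environment (an assumption for illustration): a 1-D corridor where
# state 0 is a terminal "cliff"; action 0 moves left, action 1 moves right.
def step(s, a):
    s_next = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s_next, s_next == 0                   # terminal when the cliff is hit

for episode in range(200):
    s, trajectory, done = N_STATES // 2, [], False
    for _ in range(50):
        a = choose_action(s)
        s_next, done = step(s, a)
        model[s, a] = s_next                     # fit the (here deterministic) model
        trajectory.append(s_next)
        s = s_next
        if done:
            break
    update_danger_map(trajectory, done)

print("learned danger map:", np.round(danger, 2))
```

After a few hundred episodes the danger map is highest for states adjacent to the cliff, and the greedy action choice keeps the agent away from them; the training signal touches only the small neighborhood of terminal states, which is the efficiency argument the abstract makes.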
Keywords
Reinforcement Learning, Model-Based Reinforcement Learning, Survive, Risk Map, Danger Map, Survival