Safe Reinforcement Learning via Online Shielding

arXiv (2019)

Citations: 13 | Views: 71
Abstract
Reinforcement learning is a promising approach to learning control policies for complex robotics tasks. A key challenge is ensuring safety of the learned control policy---e.g., that a walking robot does not fall over, or a quadcopter does not run into a wall. We focus on the setting where the dynamics are known, and the goal is to prove that a policy learned in simulation satisfies a given safety constraint. Existing approaches for ensuring safety suffer from a number of limitations---e.g., they do not scale to high-dimensional state spaces, or they only ensure safety for a fixed environment. We propose an approach based on shielding, which uses a backup controller to override the learned controller as necessary to ensure that safety holds. Rather than compute when to use the backup controller ahead-of-time, we perform this computation online. By doing so, we ensure that our approach is computationally efficient, and furthermore, can be used to ensure safety even in novel environments. We empirically demonstrate that our approach can ensure safety in experiments on cart-pole and on a bicycle with random obstacles.
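The core idea of the abstract's shielding scheme, checking online whether the backup controller can still keep the system safe after the learned action, can be sketched as below. This is a minimal illustration under assumed names and a toy one-dimensional dynamics model; the safety set, dynamics, and controllers here are hypothetical stand-ins, not the paper's actual benchmarks or implementation.

```python
# Sketch of online shielding (illustrative; all names and dynamics are
# assumptions, not the paper's API). At each step, we simulate forward
# under the known dynamics: if the learned action leaves a state from
# which the backup controller keeps the system safe over a finite
# horizon, we use it; otherwise we override with the backup action.

def is_safe(state):
    # Hypothetical safety constraint: state must stay within bounds.
    return abs(state) <= 2.0

def dynamics(state, action):
    # Known (toy) dynamics model: a 1-D integrator.
    return state + action

def backup_action(state):
    # Backup controller: drive the state back toward the origin.
    return -0.5 * state

def shield(state, learned_action, horizon=10):
    """Return the learned action if the backup controller can keep the
    resulting trajectory safe for `horizon` steps; else the backup."""
    s = dynamics(state, learned_action)
    for _ in range(horizon):
        if not is_safe(s):
            return backup_action(state)
        s = dynamics(s, backup_action(s))
    return learned_action
```

Because the safety check is done per step at runtime rather than precomputed over the whole state space, the same loop can be reused in novel environments (e.g., with freshly sampled obstacles), which is the computational advantage the abstract claims over ahead-of-time shield synthesis.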