Model-Free Safe Reinforcement Learning Through Neural Barrier Certificate

IEEE Robotics and Automation Letters (2023)

Abstract
Safety is a critical concern when applying reinforcement learning (RL) to real-world control tasks. However, existing safe RL works either only consider expected safety constraint violations and fail to maintain safety guarantees, or use overly conservative safety certificate tools borrowed from safe control theory, which sacrifice reward optimization and rely on analytic system models. This letter proposes a model-free safe RL algorithm that achieves near-zero constraint violations with high rewards. Our key idea is to jointly learn a policy and a neural barrier certificate under a stepwise state constraint setting. The barrier certificate is learned in a model-free manner by minimizing the violations of appropriate barrier properties on transition data collected by the policy. We extend the single-step invariant property of the barrier certificate to a multi-step version and construct the corresponding multi-step invariant loss. This loss balances the bias and variance of the barrier certificate and enhances both the safety and performance of the policy. The policy is optimized under the constraint of the multi-step invariant property using the Lagrangian method. We optimize the policy in a model-free manner by introducing an importance sampling weight in the constraint. We test our algorithm on multiple problems, including classic control tasks, robot collision avoidance, and autonomous driving. Results show that our algorithm achieves near-zero constraint violations and high performance compared to the baselines. Moreover, the learned barrier certificates successfully identify the feasible regions on multiple tasks.
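To make the training signal concrete, below is a minimal sketch of how a neural barrier certificate can be fit from transition data by penalizing violations of its defining properties, as the abstract describes. This is an illustrative PyTorch reconstruction under common conventions (positive on constraint-satisfying states, negative on violating states, single-step invariance along transitions), not the paper's actual implementation; the names BarrierNet, barrier_loss, lam, and eps, as well as the hinge-penalty form, are assumptions.

```python
import torch
import torch.nn as nn

class BarrierNet(nn.Module):
    """Small MLP barrier certificate B(s) (architecture is illustrative)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s).squeeze(-1)

def barrier_loss(B, s_safe, s_unsafe, s, s_next, lam=0.1, eps=0.01):
    """Hinge penalties on the (assumed) barrier properties, evaluated
    model-free on sampled data:
      (1) B(s) > 0 on states known to satisfy the constraint,
      (2) B(s) < 0 on constraint-violating states,
      (3) single-step invariance along observed transitions:
          B(s') >= (1 - lam) * B(s).
    lam is a decay rate and eps a margin; both are illustrative."""
    l_safe = torch.relu(eps - B(s_safe)).mean()
    l_unsafe = torch.relu(eps + B(s_unsafe)).mean()
    l_inv = torch.relu((1.0 - lam) * B(s) - B(s_next) + eps).mean()
    return l_safe + l_unsafe + l_inv

# Example usage with random placeholder data (dimensions are illustrative):
B = BarrierNet(state_dim=4)
s_safe, s_unsafe = torch.randn(32, 4), torch.randn(32, 4)
s, s_next = torch.randn(32, 4), torch.randn(32, 4)
barrier_loss(B, s_safe, s_unsafe, s, s_next).backward()
```

The paper's multi-step invariant loss extends condition (3) from one transition to a multi-step rollout, trading bias against variance; the single-step hinge above is the base case of that construction.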
Keywords
Robot safety, reinforcement learning, neural barrier certificate