Safe Policies for Reinforcement Learning via Primal-Dual Methods

IEEE Transactions on Automatic Control (2023)

Abstract
In this article, we study the design of controllers in the context of stochastic optimal control under the assumption that the model of the system is not available. That is, we aim to control a Markov decision process whose transition probabilities are unknown, but to which we have access through sample trajectories gathered from experience. We define safety as the agent remaining in a desired safe set with high probability during the operation time. The drawbacks of this formulation are twofold: the problem is nonconvex, and computing the gradients of the constraints with respect to the policies is prohibitive. Hence, we propose an ergodic relaxation of the constraints with the following advantages. 1) The safety guarantees are maintained in the case of episodic tasks, and they hold up to a given time horizon for continuing tasks. 2) Despite its nonconvexity, the constrained optimization problem has an arbitrarily small duality gap if the parametrization of the controller is rich enough. 3) The gradients of the Lagrangian associated with the safe-learning problem can be computed using standard reinforcement learning results and stochastic approximation tools. Leveraging these advantages, we exploit primal-dual algorithms to find policies that are both safe and optimal. We test the proposed approach on a navigation task in a continuous domain. The numerical results show that our algorithm is capable of dynamically adapting the policy to the environment and the required safety levels.
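To make the primal-dual idea in the abstract concrete, the following is a minimal sketch (not the paper's implementation): a score-function policy-gradient step on the Lagrangian, alternated with dual ascent on the multiplier of the relaxed (discounted, ergodic-style) safety constraint. The toy 1-D dynamics, the Gaussian policy, the safe set, and all step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, GAMMA = 50, 0.95                                  # horizon and discount factor
SAFETY_LEVEL = 0.8 * (1 - GAMMA**H) / (1 - GAMMA)    # required discounted safe occupancy (assumed)
LR_THETA, LR_LAMBDA = 1e-3, 1e-2

theta = np.zeros(2)   # parameters of a 1-D Gaussian policy with mean = theta @ [s, 1]
lam = 0.0             # Lagrange multiplier of the safety constraint

def rollout(theta):
    """Simulate one trajectory of a toy 1-D system; the dynamics are only
    sampled, never used in closed form (model-free setting)."""
    s = 0.0
    rewards, safe, scores = [], [], []
    for _ in range(H):
        feats = np.array([s, 1.0])
        mean = theta @ feats
        a = mean + rng.normal(0.0, 1.0)           # Gaussian exploration noise
        scores.append((a - mean) * feats)         # grad_theta log N(a; mean, 1)
        s = 0.9 * s + a + 0.1 * rng.normal()      # unknown dynamics (toy example)
        rewards.append(-abs(s - 1.0))             # drive the state toward s = 1
        safe.append(float(abs(s) <= 2.0))         # indicator of the safe set |s| <= 2
    return np.array(rewards), np.array(safe), np.array(scores)

for it in range(2000):
    rewards, safe, scores = rollout(theta)
    disc = GAMMA ** np.arange(H)
    ret = np.sum(disc * rewards)                  # discounted return
    occ = np.sum(disc * safe)                     # discounted safe occupancy (relaxed constraint)

    # Primal step: stochastic (REINFORCE-style) gradient of the Lagrangian w.r.t. theta.
    grad = scores.sum(axis=0) * (ret + lam * occ)
    theta += LR_THETA * grad

    # Dual step: increase lam when occ < SAFETY_LEVEL (violation), shrink it otherwise,
    # projecting back onto lam >= 0.
    lam = max(0.0, lam - LR_LAMBDA * (occ - SAFETY_LEVEL))

print("learned parameters:", theta, "multiplier:", lam)
```

The multiplier acts as an adaptive penalty: it grows while the sampled safe occupancy falls short of the required level and decays to zero once the constraint is satisfied with slack, which is how the policy can adjust to different required safety levels.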
Keywords
Safety, Trajectory, Reinforcement learning, Task analysis, Optimal control, Optimization, Markov processes, Autonomous systems, Gradient methods, Unsupervised learning