Reinforcement Learning-based Receding Horizon Control using Adaptive Control Barrier Functions for Safety-Critical Systems
CoRR (2024)
Abstract
Optimal control methods provide solutions to safety-critical problems but
easily become intractable. Control Barrier Functions (CBFs) have emerged as a
popular technique that facilitates their solution by provably guaranteeing
safety, through their forward invariance property, at the expense of some
performance loss. This approach involves defining a performance objective
alongside CBF-based safety constraints that must always be enforced.
Unfortunately, both performance and solution feasibility can be significantly
impacted by two key factors: (i) the selection of the cost function and
associated parameters, and (ii) the calibration of parameters within the
CBF-based constraints, which capture the trade-off between performance and
conservativeness.
To address these challenges, we propose a Reinforcement Learning (RL)-based Receding Horizon Control (RHC)
approach leveraging Model Predictive Control (MPC) with CBFs (MPC-CBF). In
particular, we parameterize our controller and use bilevel optimization, where
RL is used to learn the optimal parameters while MPC computes the optimal
control input. We validate our method by applying it to the challenging
automated merging control problem for Connected and Automated Vehicles (CAVs)
at conflicting roadways. Results demonstrate improved performance and a
significant reduction in the number of infeasible cases compared to traditional
heuristic approaches used for tuning CBF-based controllers, showcasing the
effectiveness of the proposed method.
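The bilevel structure described above can be illustrated with a minimal sketch. This is a hypothetical toy example on a scalar system, not the paper's implementation: an outer loop (a stand-in for the RL learner, here a simple grid search) tunes the CBF class-K parameter `alpha`, while an inner controller (a stand-in for the MPC-CBF layer, reduced to a one-step safety filter) computes the control input. All function names, the dynamics, and the cost are illustrative assumptions.

```python
# Toy bilevel sketch (hypothetical, not the paper's method):
# outer loop tunes the CBF parameter alpha; inner controller
# applies a CBF-based safety filter to a nominal control input.

def inner_controller(x, target, alpha, dt=0.1):
    """One step of a CBF-filtered controller for the toy system x' = u.

    Safety set: h(x) = x - x_min >= 0, with the discrete CBF condition
    h(x + u*dt) >= (1 - alpha) * h(x), for 0 < alpha <= 1.
    """
    x_min = 0.0
    u_nom = -(x - target)           # proportional nominal controller
    h = x - x_min
    u_safe = (-alpha * h) / dt      # lower bound implied by the CBF condition
    return max(u_nom, u_safe)       # minimally deviate from the nominal input

def rollout_cost(alpha, x0=2.0, target=-1.0, steps=50, dt=0.1):
    """Episode cost: tracking error plus a large penalty on safety violations."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = inner_controller(x, target, alpha, dt)
        x = x + u * dt
        cost += (x - target) ** 2
        if x < 0.0:                 # safety violation (should not occur here)
            cost += 1e3
    return cost

def tune_alpha(candidates):
    """Outer loop: a crude stand-in for RL, picks the best parameter by rollout."""
    return min(candidates, key=rollout_cost)

best_alpha = tune_alpha([0.1, 0.3, 0.5, 0.9])
```

Note the trade-off the abstract refers to: a small `alpha` makes the CBF constraint bind early and far from the boundary (conservative, higher tracking cost), while a larger `alpha` lets the state approach the safe-set boundary more closely before the filter intervenes.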