Online Policies for Real-Time Control Using MRAC-RL

CDC (2021)

Abstract
In this paper, we propose the Model Reference Adaptive Control & Reinforcement Learning (MRAC-RL) approach to developing online policies for systems in which modeling errors occur in real time. Although reinforcement learning (RL) algorithms have been successfully used to develop control policies for dynamical systems, discrepancies between simulated dynamics and the true target dynamics can cause trained policies to fail to generalize and adapt appropriately when deployed in the real world. The MRAC-RL framework generates online policies by combining an inner-loop adaptive controller with a simulation-trained outer-loop RL policy. This structure allows MRAC-RL to adapt and operate effectively in a target environment even when parametric uncertainties exist. We propose a set of novel MRAC algorithms, apply them to a class of nonlinear systems, derive the associated control laws, provide stability guarantees for the resulting closed-loop system, and show that the adaptive tracking objective is achieved. Using a simulation study of an automated quadrotor landing task, we demonstrate that the MRAC-RL approach improves upon state-of-the-art RL algorithms and techniques through the generation of online policies.
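
To illustrate the inner-loop/outer-loop structure the abstract describes, below is a minimal Python sketch using a hypothetical scalar plant x_dot = a*x + b*u with unknown (a, b). The rl_policy stand-in, the gains, and all numerical values are assumptions for illustration; the adaptive law shown is a standard gradient-type direct MRAC update, not the paper's specific algorithms.

# Sketch of an MRAC-RL loop: a simulation-trained outer-loop policy commands
# a reference model, while an inner-loop adaptive controller keeps the true
# plant (with unknown parameters) tracking that reference model.

dt = 0.01
a_true, b_true = -0.5, 1.4    # unknown target dynamics (assumed values)
a_ref, b_ref = -1.0, 1.0      # reference model the policy was trained on
gamma = 5.0                   # adaptation gain (assumed)

def rl_policy(x_m):
    """Stand-in for the simulation-trained outer-loop RL policy."""
    return -2.0 * x_m         # drives the reference model toward the origin

x, x_m = 2.0, 2.0             # plant state, reference-model state
kx, kr = 0.0, 1.0             # adaptive feedback / feedforward gains

for _ in range(2000):
    r = rl_policy(x_m)        # outer loop: nominal command
    u = kx * x + kr * r       # inner loop: adapted control input
    e = x - x_m               # model-tracking error
    # Gradient-type adaptive update (standard direct MRAC, sign(b) > 0):
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    # Propagate true plant and reference model one Euler step:
    x += (a_true * x + b_true * u) * dt
    x_m += (a_ref * x_m + b_ref * r) * dt

print(f"final tracking error: {x - x_m:.4f}")

Despite the mismatch between (a_true, b_true) and the reference model, the adaptive gains converge toward the matching values, so the tracking error shrinks and the outer-loop policy continues to behave as it did in simulation.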
Keywords
trained policies, MRAC-RL framework, online policies, inner-loop adaptive controller, simulation-trained outer-loop RL policy, novel MRAC algorithms, associated control laws, adaptive tracking objective, MRAC-RL approach, real-time control, Model Reference Adaptive Control & Reinforcement Learning approach, reinforcement learning algorithms, control policies, dynamical systems, simulated dynamics, target dynamics