On Model-Free Reinforcement Learning Of Reduced-Order Optimal Control For Singularly Perturbed Systems
2018 IEEE Conference on Decision and Control (CDC), 2018
Abstract
We propose a model-free reduced-order optimal control design for linear time-invariant singularly perturbed (SP) systems using reinforcement learning (RL). Both the state and input matrices of the plant model are assumed to be completely unknown; the only assumption imposed is that the model admits a similarity transformation resulting in a SP representation. We propose a variant of Adaptive Dynamic Programming (ADP) that employs only the slow states of this SP model to learn a reduced-order adaptive optimal controller. By exploiting this model reduction, the method significantly reduces both the learning time and the computational complexity of the feedback design. We use approximation theorems from singular perturbation theory to establish sub-optimality of the learned controller and to guarantee closed-loop stability. We validate our results on two representative examples: one with standard singularly perturbed dynamics, and the other with clustered multi-agent consensus dynamics. Both examples highlight implementation details and the effectiveness of the proposed approach.
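To illustrate the reduced-order idea underlying the paper, the sketch below performs the classical quasi-steady-state reduction of a singularly perturbed system and then solves the reduced LQR problem by Kleinman policy iteration. Note this is a hypothetical, model-based illustration only: the plant matrices, weights, and initial gain are made-up examples, and the paper's actual method is model-free, replacing the Lyapunov-equation step with least-squares estimates built from measured slow-state data.

```python
import numpy as np

# Hypothetical SP plant (matrices assumed known here purely for illustration;
# the paper's ADP method learns from data without knowing them):
#   x_s' = A11 x_s + A12 x_f + B1 u        (slow states)
#   eps * x_f' = A21 x_s + A22 x_f + B2 u  (fast states, A22 Hurwitz)
A11 = np.array([[0., 1.], [0., 0.]])
A12 = np.array([[0.], [1.]])
A21 = np.array([[1., 0.]])
A22 = np.array([[-1.]])
B1 = np.zeros((2, 1))
B2 = np.array([[1.]])

# Quasi-steady-state reduction: let eps -> 0 and eliminate the fast states.
A22_inv = np.linalg.inv(A22)
A0 = A11 - A12 @ A22_inv @ A21   # reduced slow state matrix
B0 = B1 - A12 @ A22_inv @ B2     # reduced input matrix

Q = np.eye(2)   # state weight (example choice)
R = np.eye(1)   # input weight (example choice)

def lyap(M, S):
    """Solve M^T P + P M + S = 0 for symmetric P via vectorization."""
    n = M.shape[0]
    L = np.kron(np.eye(n), M.T) + np.kron(M.T, np.eye(n))
    P = np.linalg.solve(L, -S.reshape(-1)).reshape(n, n)
    return (P + P.T) / 2

# Kleinman policy iteration on the reduced model: each step evaluates the
# current gain via a Lyapunov equation, then improves it. In the model-free
# ADP variant, this evaluation step is replaced by least squares on data.
K = np.array([[2., 2.]])  # any initial stabilizing gain for (A0, B0)
for _ in range(30):
    P = lyap(A0 - B0 @ K, Q + K.T @ R @ K)
    K = np.linalg.solve(R, B0.T @ P)

print("reduced-order gain K =", K)
print("closed-loop slow eigenvalues:", np.linalg.eigvals(A0 - B0 @ K))
```

Singular perturbation theory then guarantees that this reduced-order gain, applied to the full plant, is O(eps) sub-optimal and stabilizing for sufficiently small eps, which is the structure the paper's learned controller inherits.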
Keywords
Reinforcement learning, adaptive dynamic programming, model reduction, model free control, singular perturbation