Reinforcement Learning based Approximate Optimal Control of Nonlinear Systems using Carleman Linearization

2023 American Control Conference (ACC 2023)

Abstract
We develop a policy iteration-based, model-free reinforcement learning (RL) controller for single-input nonlinear systems. First, Carleman linearization, a commonly used linearization technique in Hilbert space, is applied to express the nonlinear system as an infinite-dimensional Carleman state-space model, followed by the derivation of an online state-feedback RL controller that uses state and input data in this infinite-dimensional space. Next, the practicality of using any finite-order truncation of this controller, and the corresponding closed-loop stability of the nonlinear plant, are established. Results are validated on two numerical examples, where we show that the proposed method provides solutions close to the optimal control obtained from model-based Carleman controllers. We also compare our controller to alternative data-driven methods, showing its advantage in terms of shorter learning time.
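To illustrate the lifting the abstract refers to (this is a minimal sketch, not the paper's own derivation), consider a generic scalar input-affine system with polynomial drift; the coefficients a_1, a_2, b, the truncation order N, and the lifted-state notation below are assumptions introduced here purely for exposition.

\[
\dot{x} = a_1 x + a_2 x^2 + b\,u, \qquad z_k := x^k, \quad k = 1, 2, \dots
\]
\[
\dot{z}_k = k\,x^{k-1}\dot{x} = k a_1 z_k + k a_2 z_{k+1} + k b\, z_{k-1} u .
\]

Each monomial's derivative couples only to neighboring monomials, so stacking z = (z_1, z_2, \dots) gives an infinite-dimensional Carleman state-space model. Truncating at order N (setting z_{N+1} \approx 0) yields a finite-dimensional approximation of the form
\[
\dot{z}_{[N]} \approx A_N z_{[N]} + \big(B_N + \mathcal{N}_N z_{[N]}\big)\,u,
\]
where A_N collects the drift coefficients, B_N comes from the k = 1 term (since z_0 = 1), and \mathcal{N}_N captures the bilinear coupling between the input and higher-order monomials. The paper's RL controller is learned from state and input data of such a lifted representation rather than from the model coefficients themselves; the exact structure used in the paper may differ from this sketch.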