Near-Optimal Control Of Motor Drives Via Approximate Dynamic Programming

2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), 2019

Cited by 4 | Viewed 4

Abstract
Data-driven methods for learning near-optimal control policies through approximate dynamic programming (ADP) have garnered widespread attention. In this paper, we investigate how data-driven control methods can be leveraged to achieve near-optimal performance in a core component of modern factory systems: the electric motor drive. We apply policy-iteration-based ADP to an induction motor model in order to construct a state-feedback control policy for a given cost functional. The approximate error-convergence properties of policy iteration imply that the learned control policy is near-optimal. We demonstrate that carefully selecting the cost functional and the initial control policy yields a near-optimal control policy that outperforms both a baseline nonlinear control policy based on backstepping and the initial control policy itself.
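To make the policy-iteration core of the abstract concrete, the following is a minimal model-based sketch on a hypothetical linear plant (a discretized double integrator, not the paper's induction motor model; the paper's data-driven setting would estimate these quantities from measurements rather than from known A, B matrices). Each iteration evaluates the current linear policy u = -Kx via a Lyapunov equation, then improves it greedily, converging to the optimal LQR gain:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Hypothetical linearized plant (double integrator, dt = 0.1):
# x[k+1] = A x[k] + B u[k], stage cost x'Qx + u'Ru.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def policy_iteration(K, iters=30):
    """Kleinman-style policy iteration: evaluate the current policy
    u = -K x, then improve it greedily w.r.t. its value matrix P."""
    for _ in range(iters):
        Ac = A - B @ K                                  # closed-loop dynamics
        # Policy evaluation: solve Ac' P Ac - P + (Q + K'RK) = 0
        P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
        # Policy improvement: one-step greedy update of the gain
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

K0 = np.array([[10.0, 5.0]])        # any stabilizing initial policy
K, P = policy_iteration(K0)

# Sanity check against the optimal gain from the Riccati equation
P_star = solve_discrete_are(A, B, Q, R)
K_star = np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)
print(np.allclose(K, K_star, atol=1e-6))  # → True
```

As the abstract notes, the initial policy matters: policy iteration of this form requires a stabilizing initial gain, and the quality of that starting point shapes how quickly the iterates approach the optimum.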
Keywords
learning near-optimal control policy,control design,backstepping,baseline nonlinear control policy,approximate error convergence properties,state feedback control policy,induction motor model,policy iteration-based ADP,electric motor drive,data-driven control methods,approximate dynamic programming