Model-free distributed optimal control for general discrete-time linear systems using reinforcement learning

Xinjun Feng, Zhiyun Zhao, Wen Yang

International Journal of Robust and Nonlinear Control (2024)

Abstract
This article proposes a novel data-driven framework for distributed optimal consensus of discrete-time linear multi-agent systems under general digraphs. A fully distributed control protocol is designed using the linear quadratic regulator (LQR) approach and is proved, via dynamic programming and the minimum principle, to be a necessary and sufficient condition for optimal control of the multi-agent system. Moreover, the control protocol can be constructed from local information with the aid of the solution of the algebraic Riccati equation (ARE). Based on the Q-learning method, a reinforcement learning framework is presented that finds the solution of the ARE in a data-driven way, requiring only data collected from an arbitrary follower to learn the feedback gain matrix. The multi-agent system can thus achieve distributed optimal consensus even when the system dynamics and global information are completely unavailable. For the output-feedback case, accurate state estimation is established so that optimal consensus control is still realized. Furthermore, the data-driven optimal consensus method designed in this article applies to any general digraph that contains a directed spanning tree. Finally, numerical simulations verify the validity of the proposed optimal control protocols and the data-driven framework.
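The core model-free ingredient the abstract describes is Q-learning for the discrete-time LQR: the quadratic Q-function of a fixed policy is estimated from trajectory data by least squares, and the feedback gain is improved from the learned Q-matrix, so the ARE is effectively solved without knowing the system matrices. The following is a minimal single-agent sketch of that idea (not the authors' multi-agent algorithm); the system matrices `A`, `B` and all numerical values are hypothetical and are used only to generate data, never by the learner.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stable 2-state, 1-input plant; used only as a data generator.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)            # stage cost x'Qc x + u'Rc u
n, m = 2, 1

def quad_basis(z):
    """Quadratic features so that theta . quad_basis(z) = z' H z
    for a symmetric H stored as its upper triangle (row-major)."""
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(len(z)) for j in range(i, len(z))])

K = np.zeros((m, n))                     # initial stabilizing policy
for _ in range(15):                      # Q-learning policy iteration
    Phi, y = [], []
    for _ in range(100):                 # off-policy one-step samples
        x = rng.standard_normal(n)
        u = rng.standard_normal(m)       # exploratory action
        x_next = A @ x + B @ u           # "measured" from the system
        u_next = -K @ x_next             # greedy action under current K
        # Bellman equation: Q(x,u) - Q(x',u') = stage cost (deterministic case)
        Phi.append(quad_basis(np.concatenate([x, u]))
                   - quad_basis(np.concatenate([x_next, u_next])))
        y.append(x @ Qc @ x + u @ Rc @ u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    # Unpack theta into the symmetric Q-matrix H (same ordering as quad_basis).
    H, idx = np.zeros((n + m, n + m)), 0
    for i in range(n + m):
        for j in range(i, n + m):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    K = np.linalg.solve(H[n:, n:], H[n:, :n])   # improvement: K = Huu^-1 Hux

# Model-based check: value-iterate the discrete ARE and compare gains.
P = np.eye(n)
for _ in range(500):
    P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        Rc + B.T @ P @ B, B.T @ P @ A)
K_star = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
```

Because the dynamics here are deterministic, the least-squares Bellman fit is exact given persistently exciting samples, and the learned gain `K` matches the ARE-based gain `K_star`; the paper extends this mechanism so that data from a single arbitrary follower suffices for the whole network.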
Keywords
discrete systems, LQR, model-free, optimal control, Q-learning