Reinforcement learning combined with model predictive control to optimally operate a flash separation unit

Dean Brandner, Torben Talis, Erik Esche, Jens-Uwe Repke, Sergio Lucia

Computer-Aided Chemical Engineering (2023)

Abstract
Model predictive control (MPC) and reinforcement learning (RL) are two powerful optimal control methods. However, the performance of MPC depends mainly on the accuracy of the underlying model and on the prediction horizon, while classic RL requires an excessive amount of data and cannot consider constraints explicitly. This work combines both approaches and uses Q-learning to improve the closed-loop performance of a parameterized MPC scheme with a surrogate model and a short prediction horizon. The parameterized MPC provides a suitable starting point for RL training, which keeps the required amount of data reasonable, and constraints are considered explicitly. The surrogate model and the short prediction horizon allow the solution to be computed in real time. The method is applied to the control of a flash separation unit and compared against an MPC scheme that uses a rigorous model and a long prediction horizon.
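The core idea of tuning a parameterized MPC with Q-learning can be illustrated on a toy problem. The following is a minimal sketch, not the paper's flash-separation model: it assumes a scalar system x⁺ = x + u with stage cost x² + u², a one-step-horizon MPC whose learnable parameter θ weights a terminal cost θ·x², and a temporal-difference update of θ using the MPC objective as the Q-function approximator. All function names and numerical values here are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): scalar dynamics
# x+ = x + u, stage cost x^2 + u^2, one-step MPC with learnable
# terminal weight theta, so the MPC objective doubles as
#   Q_theta(x, u) = x^2 + u^2 + theta * (x + u)^2.

def q_value(x, u, theta):
    """Short-horizon MPC objective with the first input fixed to u."""
    return x**2 + u**2 + theta * (x + u)**2

def mpc_policy(x, theta):
    """argmin_u Q_theta(x, u); available in closed form for this toy case."""
    return -theta * x / (1.0 + theta)

def v_value(x, theta):
    """min_u Q_theta(x, u): the MPC value function."""
    return q_value(x, mpc_policy(x, theta), theta)

def train(theta=0.5, alpha=0.05, gamma=1.0, iters=5000, seed=0):
    """Q-learning on the MPC parameter theta via the TD error."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        x = rng.uniform(-2.0, 2.0)                        # sampled state
        u = mpc_policy(x, theta) + rng.normal(0.0, 0.3)   # exploration noise
        x_next = x + u                                    # model step
        stage = x**2 + u**2
        td_err = stage + gamma * v_value(x_next, theta) - q_value(x, u, theta)
        # dQ_theta/dtheta = (x + u)^2, so the parameter update is:
        theta += alpha * td_err * (x + u)**2
    return theta

theta_star = train()
print(theta_star)  # approaches the Riccati weight (1 + sqrt(5)) / 2 ≈ 1.618
```

Even with its one-step horizon, the tuned controller reproduces the infinite-horizon optimal feedback, because the learned terminal weight converges to the exact cost-to-go coefficient; this is the mechanism by which RL training can compensate for a short prediction horizon and an inexact (surrogate) model.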
Keywords
model predictive control, reinforcement learning, separation