A model-based deep reinforcement learning approach to the nonblocking coordination of modular supervisors of discrete event systems.

Inf. Sci. (2023)

Cited by 1 | Views 9
Abstract
Modular supervisory control may lead to conflicts among the modular supervisors of large-scale discrete event systems. Existing methods for ensuring nonblocking control of modular supervisors either exploit favorable structures in the system model to guarantee the nonblocking property, or employ hierarchical model abstraction to reduce the computational complexity of designing a nonblocking coordinator. The nonblocking modular control problem is, in general, NP-hard. This study integrates supervisory control theory with a model-based deep reinforcement learning method to synthesize a nonblocking coordinator for the modular supervisors. The deep reinforcement learning method significantly reduces the computational complexity by avoiding the synchronous composition of the multiple modular supervisors and the plant models. The supervisory control function is approximated by a deep neural network instead of a large finite automaton. Furthermore, the proposed model-based deep reinforcement learning method is more efficient than the standard deep Q-network algorithm.
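The core idea can be illustrated with a minimal sketch (not from the paper): two hypothetical modular supervisors are kept as separate automata, and a coordinator is learned by reinforcement learning that rewards reaching a jointly marked state and penalizes blocking (deadlock in an unmarked state), without ever building the full synchronous product up front. For brevity this sketch uses a tabular Q-function as a stand-in for the deep neural network; the supervisor models themselves serve as the simulator, in the spirit of model-based learning. All automata, alphabets, and reward values here are illustrative assumptions.

```python
import random
from collections import defaultdict

# Hypothetical toy supervisors (illustrative only): state -> {event: next_state}.
SUP1 = {0: {'a': 1}, 1: {'b': 0, 'c': 2}, 2: {}}   # alphabet {a, b, c}
SUP2 = {0: {'a': 1}, 1: {'b': 0}}                  # alphabet {a, b}
A1, A2 = {'a', 'b', 'c'}, {'a', 'b'}
MARKED = {(0, 0)}                                  # jointly marked states

def enabled(s):
    """Events executable under synchronous composition at joint state s."""
    s1, s2 = s
    evs = set(SUP1[s1]) | set(SUP2[s2])
    return [e for e in evs
            if (e not in A1 or e in SUP1[s1]) and (e not in A2 or e in SUP2[s2])]

def step(s, e):
    """Advance every supervisor whose alphabet contains e (shared-event sync)."""
    s1, s2 = s
    return (SUP1[s1][e] if e in A1 else s1,
            SUP2[s2][e] if e in A2 else s2)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Q-learning coordinator: reward marked states, punish blocking."""
    rng = random.Random(seed)
    Q = defaultdict(float)                 # Q[(joint_state, event)] -> value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(20):                # cap episode length
            acts = enabled(s)
            if not acts:                   # already blocked: end episode
                break
            e = (rng.choice(acts) if rng.random() < eps
                 else max(acts, key=lambda a: Q[(s, a)]))
            s_next = step(s, e)
            nxt = enabled(s_next)
            if s_next in MARKED:
                r, done = 1.0, True        # nonblocking: marked state reached
            elif not nxt:
                r, done = -1.0, True       # blocking: deadlock, unmarked
            else:
                r, done = 0.0, False
            target = r + (0.0 if done else gamma * max(Q[(s_next, a)] for a in nxt))
            Q[(s, e)] += alpha * (target - Q[(s, e)])
            if done:
                break
            s = s_next
    return Q
```

In this toy model, executing `c` from joint state `(1, 1)` deadlocks, so the learned coordinator assigns it a negative value and effectively disables it, while `b` (which returns to the marked state) is preferred. Replacing the tabular `Q` with a neural network and the exhaustive joint-state enumeration with sampled rollouts is what lets the approach scale past an explicit product automaton.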
Keywords
Deep reinforcement learning, Discrete event system, Local modular control, Supervisory control theory