Solving the flexible job-shop scheduling problem through an enhanced deep reinforcement learning approach
CoRR (2023)
Abstract
In scheduling problems common in the industry and various real-world
scenarios, responding in real-time to disruptive events is essential. Recent
methods propose the use of deep reinforcement learning (DRL) to learn policies
capable of generating solutions under this constraint. The objective of this
paper is to introduce a new DRL method for solving the flexible job-shop
scheduling problem, particularly for large instances. The approach is based on
the use of heterogeneous graph neural networks to a more informative graph
representation of the problem. This novel modeling of the problem enhances the
policy's ability to capture state information and improve its decision-making
capacity. Additionally, we introduce two novel approaches to enhance the
performance of the DRL approach: the first involves generating a diverse set of
scheduling policies, while the second combines DRL with dispatching rules (DRs)
to constrain the action space. Experimental results on two public benchmarks
constraining the action space. Experimental results on two public benchmarks
show that our approach outperforms DRs and achieves superior results compared
to three state-of-the-art DRL methods, particularly for large instances.
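The second idea mentioned in the abstract, using dispatching rules to constrain the action space of a DRL policy, can be illustrated with a minimal sketch. The names, data layout, and the choice of the shortest-processing-time (SPT) rule below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: a dispatching rule (SPT) prunes the candidate
# operations, and the DRL policy then chooses only among the survivors.
# All identifiers here are illustrative, not from the paper.

def spt_mask(candidate_ops, top_k=3):
    """Keep only the top_k candidates with the shortest processing time
    (the SPT dispatching rule). Returns a boolean mask aligned with the
    original candidate order."""
    ranked = sorted(candidate_ops, key=lambda op: op["proc_time"])
    allowed = {op["id"] for op in ranked[:top_k]}
    return [op["id"] in allowed for op in candidate_ops]

def masked_argmax(scores, mask):
    """Pick the highest-scoring action among those the rule allows,
    standing in for the policy's constrained action selection."""
    return max(
        (i for i, ok in enumerate(mask) if ok),
        key=lambda i: scores[i],
    )

ops = [
    {"id": 0, "proc_time": 5},
    {"id": 1, "proc_time": 2},
    {"id": 2, "proc_time": 9},
    {"id": 3, "proc_time": 1},
]
mask = spt_mask(ops, top_k=2)        # only ops 1 and 3 remain eligible
scores = [0.9, 0.4, 0.8, 0.6]        # stand-in for policy logits
action = masked_argmax(scores, mask)  # op 0 scores highest but is masked out
```

The benefit of such a hybrid is that the rule discards clearly poor actions cheaply, shrinking the search space the learned policy must cover, which matters most on the large instances the abstract emphasizes.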
Keywords
scheduling policies, deep reinforcement learning, reinforcement learning