Document Type
Article
Publication Title
arXiv
Abstract
This paper introduces a Reinforcement Learning approach to better generalize heuristic dispatching rules on the Job-shop Scheduling Problem (JSP). Current models for the JSP do not focus on generalization, although, as we show in this work, generalization is key to learning better heuristics for the problem. A well-known technique for improving generalization is to train on increasingly complex instances using Curriculum Learning (CL). However, as many works in the literature indicate, this technique can suffer from catastrophic forgetting when the learned skills are transferred between different problem sizes. To address this issue, we introduce a novel Adversarial Curriculum Learning (ACL) strategy, which dynamically adjusts the difficulty level during training so as to revisit the worst-performing instances. This work also presents a deep learning model for solving the JSP that is equivariant with respect to the job definition and agnostic to problem size. Experiments conducted on Taillard's and Demirkol's instances show that the presented approach significantly improves on the current state-of-the-art models for the JSP, reducing the average optimality gap from 19.35% to 10.46% on Taillard's instances and from 38.43% to 18.85% on Demirkol's instances. Our implementation is available online. © 2022, CC BY-NC-SA.
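The abstract describes the ACL strategy only at a high level, so the following is a minimal, hypothetical sketch of the core idea: keep a pool of instances across difficulty levels and preferentially retrain on the instance where the current policy performs worst. Every name here (LEVELS, sample_instance, train_step, optimality_gap) is a placeholder invented for illustration, not the authors' actual implementation.

```python
import random

# Hypothetical difficulty levels: (num_jobs, num_machines) of generated JSP instances.
LEVELS = [(6, 6), (10, 10), (15, 15), (20, 20)]

def sample_instance(level):
    """Placeholder: generate a random JSP instance of the given size."""
    n_jobs, n_machines = level
    return {"jobs": n_jobs, "machines": n_machines,
            "times": [[random.randint(1, 99) for _ in range(n_machines)]
                      for _ in range(n_jobs)]}

def train_step(policy, instance):
    """Placeholder for one RL update on the instance (e.g., a policy-gradient step)."""
    pass

def optimality_gap(policy, instance):
    """Placeholder: relative gap between the policy's makespan and a known bound."""
    return random.random()

def adversarial_curriculum(policy, steps=1000, pool_size=32):
    # Keep a pool spanning all difficulty levels so easy sizes are never
    # abandoned once "passed" -- this is what counters catastrophic forgetting.
    pool = [sample_instance(random.choice(LEVELS)) for _ in range(pool_size)]
    for _ in range(steps):
        gaps = [optimality_gap(policy, inst) for inst in pool]
        # Adversarial choice: revisit the worst-performing instance.
        worst = pool[max(range(pool_size), key=gaps.__getitem__)]
        train_step(policy, worst)
        # Refresh one pool slot so the effective difficulty keeps adapting.
        pool[random.randrange(pool_size)] = sample_instance(random.choice(LEVELS))
    return policy
```

Under these assumptions, the curriculum is driven by the policy's own weaknesses rather than a fixed easy-to-hard schedule, which matches the abstract's description of dynamically adjusting difficulty to revisit the worst-performing instances.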
DOI
10.48550/arXiv.2206.04423
Publication Date
June 9, 2022
Keywords
Deep learning, Job shop scheduling, Learning systems, Reinforcement learning, Catastrophic forgetting, Dispatching rules, Generalisation, Job shop scheduling problems, Learning strategy, Problem size, Reinforcement learning approach, Curricula, Artificial Intelligence (cs.AI), Machine Learning (cs.LG)
Recommended Citation
Z. Iklassov, D. Medvedev, R. Solozabal, and M. Takac, "Learning to generalize Dispatching rules on the Job Shop Scheduling," arXiv preprint arXiv:2206.04423, 2022.
Comments
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY-NC-SA 4.0
Uploaded 13 July 2022