A reinforcement learning algorithm for Dynamic Job Shop Scheduling / Alcamo, Laura; Giovenali, Niccolo; Bruno, Giulia. - ELECTRONIC. - (2024). (Paper presented at the 5th IFAC/INSTICC International Conference on Innovative Intelligent Industrial Production and Logistics, held in Porto (Portugal), 21/11/2021).
A reinforcement learning algorithm for Dynamic Job Shop Scheduling
Alcamo, Laura; Giovenali, Niccolo; Bruno, Giulia
2024
Abstract
The job shop scheduling problem, a notable NP-hard problem, requires scheduling jobs with multiple operations on specific machines in a predetermined order. The classical formulation relies on the strong assumption that all information about the manufacturing environment is known in advance and does not change during the scheduling process. However, real-world environments are significantly affected by uncertainties. Dynamic job shop scheduling is a variant of the job shop scheduling problem in which the scheduling environment is subject to change over time, including variations in job arrival times and processing times, machine breakdowns, resource availability, and job priorities. To address this issue, this paper presents a single-agent reinforcement learning algorithm, which implements proximal policy optimization with action masking to reduce the search space and improve efficiency. The algorithm was tested in both deterministic and dynamic environments and compared to traditional scheduling methods. The results demonstrate that the proposed approach is comparable to traditional methods in deterministic cases and outperforms them in dynamic environments. These findings emphasize the potential of reinforcement learning in addressing and optimizing complex scheduling challenges.
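The action-masking idea mentioned in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): actions that are currently infeasible, e.g. operations whose machine is busy or whose predecessor is unfinished, receive a probability of zero, so a PPO-style policy only ever samples schedulable operations and the effective search space shrinks. Names such as `masked_action_distribution`, `logits`, and `valid_mask` are assumptions made for illustration.

```python
# Minimal sketch of action masking for a PPO-style scheduling policy.
# Assumes at least one action is valid in the current state.
import numpy as np

def masked_action_distribution(logits: np.ndarray, valid_mask: np.ndarray) -> np.ndarray:
    """Return action probabilities with invalid actions masked out."""
    masked_logits = np.where(valid_mask, logits, -np.inf)  # forbid infeasible operations
    masked_logits -= masked_logits.max()                   # numerical stability
    probs = np.exp(masked_logits)                          # exp(-inf) = 0 for invalid actions
    return probs / probs.sum()

# Hypothetical example: 5 candidate operations, only 3 currently schedulable.
logits = np.array([0.2, 1.5, -0.3, 0.8, 0.1])
valid_mask = np.array([True, False, True, True, False])
probs = masked_action_distribution(logits, valid_mask)
action = np.random.choice(len(probs), p=probs)  # invalid actions are never sampled
```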
| File | Type | License | Size | Format |
|---|---|---|---|---|
| A reinforcement learning algorithm for Dynamic Job Shop Scheduling.pdf (restricted access) | 2. Post-print / Author's Accepted Manuscript | Non-public - Private/restricted access | 1.68 MB | Adobe PDF |
https://hdl.handle.net/11583/2993084