Document Type

Article

Publication Title

IEEE Access

Abstract

We present a novel end-to-end framework for solving the Vehicle Routing Problem with Stochastic Demands (VRPSD) using Reinforcement Learning (RL). Our formulation incorporates the correlation between stochastic demands through other observable stochastic variables, thereby offering an experimental demonstration of the theoretical premise that non-i.i.d. stochastic demands provide opportunities for improved routing solutions. Our approach bridges the gap in the application of RL to VRPSD and consists of a parameterized stochastic policy, optimized with a policy gradient algorithm, that generates the sequence of actions forming the solution. Our model outperforms previous state-of-the-art metaheuristics and demonstrates robustness to changes in the environment, such as the supply type, vehicle capacity, correlation, and noise level of demand. Moreover, the model can be easily retrained for different VRPSD scenarios by observing the reward signals and following feasibility constraints, making it highly flexible and scalable. These findings highlight the potential of RL to enhance transportation efficiency and mitigate its environmental impact in stochastic routing problems. Our implementation is available at https://github.com/Zangir/SVRP.

First Page

87958

Last Page

87969

DOI

10.1109/ACCESS.2023.3306076

Publication Date

8-17-2023

Keywords

Reinforcement learning, stochastic optimization, vehicle routing problem

Comments

Open Access

Archived thanks to IEEE Access

License: CC BY-NC-ND 4.0

Uploaded 17 January 2024