Regularization of the Policy Updates for Stabilizing Mean Field Games
Document Type
Conference Proceeding
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
This work studies non-cooperative Multi-Agent Reinforcement Learning (MARL), in which multiple agents interact in the same environment and each aims to maximize its individual return. Challenges arise when scaling up the number of agents because of the non-stationarity introduced by the many learning agents. To address this issue, Mean Field Games (MFG) rely on symmetry and homogeneity assumptions to approximate games with very large populations. Recently, deep Reinforcement Learning has been used to scale MFG to games with a larger number of states. Current methods rely on smoothing techniques such as averaging the Q-values or the updates of the mean-field distribution. This work presents a different approach to stabilizing learning, based on proximal updates of the mean-field policy. We name our algorithm Mean Field Proximal Policy Optimization (MF-PPO), and we empirically show the effectiveness of our method in the OpenSpiel framework.
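For context, the proximal update referenced in the abstract follows the standard clipped-surrogate objective of Proximal Policy Optimization; a minimal sketch of that objective, written for a representative agent's policy conditioned on the mean-field distribution (the notation below is illustrative and not taken from the paper), is

$$
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t, \mu_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t, \mu_t)},
$$

where $\mu_t$ denotes the mean-field (population) distribution, $\hat{A}_t$ an advantage estimate, and $\epsilon$ the clipping parameter that bounds how far each policy update can move from the previous policy.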
First Page
361
Last Page
372
DOI
10.1007/978-3-031-33377-4_28
Publication Date
5-28-2023
Keywords
mean-field games, proximal policy optimization, reinforcement learning
Recommended Citation
T. Algumaei, R. Solozabal, R. Alami, H. Hacid, M. Debbah, and M. Takáč, "Regularization of the Policy Updates for Stabilizing Mean Field Games", In Advances in Knowledge Discovery and Data Mining (PAKDD 2023), Lecture Notes in Computer Science, vol. 13936, pp. 361-372, May 2023. doi:10.1007/978-3-031-33377-4_28