Document Type
Article
Publication Title
arXiv
Abstract
Model-based methods have recently shown promise for offline reinforcement learning (RL), which aims to learn good policies from historical data without interacting with the environment. Previous model-based offline RL methods learn fully connected networks as world models that map states and actions to next-step states. However, a world model should adhere to the underlying causal relationships so that it supports learning an effective policy that generalizes well to unseen states. In this paper, we first provide theoretical results showing that causal world models can outperform plain world models for offline RL, by incorporating the causal structure into the generalization error bound. We then propose a practical algorithm, oFfline mOdel-based reinforcement learning with CaUsal Structure (FOCUS), to illustrate the feasibility of learning and leveraging causal structure in offline RL. Experimental results on two benchmarks show that FOCUS recovers the underlying causal structure accurately and robustly, and consequently outperforms plain model-based offline RL algorithms as well as other causal model-based RL algorithms. © 2022, CC BY.
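For intuition only, the minimal sketch below contrasts the fully connected world models described above with a causally structured one: each next-state dimension is predicted from only the state and action components permitted by a binary causal mask. This is not the paper's FOCUS algorithm; all names (MaskedDynamicsModel, causal_mask, the per-dimension MLP heads) are illustrative assumptions.

```python
# Illustrative sketch of a causally structured world model (not the authors' code).
# Each next-state dimension i only reads the (state, action) inputs allowed by
# a binary mask row mask[i]; an all-ones mask recovers a plain fully connected model.
import torch
import torch.nn as nn


class MaskedDynamicsModel(nn.Module):
    def __init__(self, state_dim, action_dim, causal_mask, hidden=64):
        # causal_mask: (state_dim, state_dim + action_dim) binary tensor;
        # entry [i, j] = 1 means next-state dim i may depend on input dim j.
        super().__init__()
        self.register_buffer("mask", causal_mask.float())
        in_dim = state_dim + action_dim
        # one small MLP head per next-state dimension, each seeing a masked input
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(state_dim)
        ])

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)  # (batch, state_dim + action_dim)
        # mask out disallowed parents before each head predicts its dimension
        preds = [head(x * self.mask[i]) for i, head in enumerate(self.heads)]
        return torch.cat(preds, dim=-1)  # predicted next state, (batch, state_dim)


# Usage (hypothetical): fit on an offline dataset of (s, a, s') transitions
# with an MSE loss; the mask would come from a causal discovery step.
```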
DOI
10.48550/arXiv.2206.01474
Publication Date
6-3-2022
Keywords
Learning systems, Historical data, Model-based method, Model-based reinforcement learning, Offline, Reinforcement learning algorithms, Reinforcement learning method, World model, Reinforcement learning, Machine Learning (cs.LG), Machine Learning (stat.ML)
Recommended Citation
Z.-M. Zhu, X.-H. Chen, H.-L. Tian, K. Zhang, and Y. Yu, "Offline Reinforcement Learning with Causal Structured World Models," arXiv preprint arXiv:2206.01474, 2022.
Comments
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY 4.0
Uploaded 14 July 2022