Advances in Preference-based Reinforcement Learning: A Review
Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
Reinforcement Learning (RL) algorithms depend on accurately engineered reward functions to properly guide learning agents toward the required tasks. Preference-based reinforcement learning (PbRL) addresses this by using human preferences elicited from experts as feedback instead of numeric rewards. Owing to this promising advantage over traditional RL, PbRL has attracted growing attention in recent years, with many significant advances. In this survey, we present a unified PbRL framework that encompasses the newly emerging approaches improving the scalability and efficiency of PbRL. In addition, we give a detailed overview of the theoretical guarantees and benchmarking work in the field, and present its recent applications to complex real-world tasks. Lastly, we discuss the limitations of current approaches and proposed future research directions. © 2022 IEEE.
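The core mechanism the abstract describes, learning a reward signal from pairwise human preferences rather than hand-engineered numeric rewards, is commonly formalized with a Bradley-Terry style preference model. The following is a minimal illustrative sketch (function and variable names are assumptions, not from the paper): the probability that a human prefers trajectory segment A over segment B is modeled as a logistic function of the difference in predicted returns, and the reward model is trained by minimizing the negative log-likelihood of the observed preference labels.

```python
import math

def preference_loss(return_a: float, return_b: float, prefer_a: int) -> float:
    """Bradley-Terry negative log-likelihood for one preference comparison.

    return_a, return_b: predicted cumulative rewards of two trajectory
        segments under the learned reward model (illustrative inputs).
    prefer_a: 1 if the human preferred segment A, 0 if segment B.
    """
    # P(A preferred over B) = sigmoid(return_a - return_b)
    p_a = 1.0 / (1.0 + math.exp(-(return_a - return_b)))
    # Negative log-likelihood of the human's label
    return -(prefer_a * math.log(p_a) + (1 - prefer_a) * math.log(1.0 - p_a))
```

With equal predicted returns the loss is log(2) (the model is maximally uncertain); it shrinks as the reward model assigns a higher return to the segment the human actually preferred, which is the gradient signal PbRL methods use to fit the reward model.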
Index terms: Benchmarking, Intelligent agents
Y. Abdelkareem, S. Shehata and F. Karray, "Advances in Preference-based Reinforcement Learning: A Review," in 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2022, pp. 2527-2532, doi: 10.1109/SMC53654.2022.9945333.