Opinion Guided Reinforcement Learning

Aug 13, 2024 · Kyanna Dagenais, Istvan David · 1 min read
Type: Publication

Abstract
Human guidance is often desired in reinforcement learning to improve the performance of the learning agent. However, human insights are often mere opinions and educated guesses rather than well-formulated arguments. While opinions are subject to uncertainty, e.g., due to partial informedness or ignorance about a problem, they also emerge earlier than hard evidence can be produced. Thus, guiding reinforcement learning agents by way of opinions offers the potential for more performant learning processes, but comes with the challenge of modeling and managing opinions in a formal way. In this article, we present a method to guide reinforcement learning agents through opinions. To this end, we provide an end-to-end method to model and manage advisors’ opinions. To assess the utility of the approach, we evaluate it with synthetic (oracle) and human advisors, at different levels of uncertainty, and under multiple advice strategies. Our results indicate that opinions, even if uncertain, improve the performance of reinforcement learning agents, resulting in higher rewards, more efficient exploration, and a better-reinforced policy.
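
To give a flavor of the idea, here is a minimal sketch of how uncertain opinions could steer an agent's exploration, assuming opinions in the style of subjective logic (belief, disbelief, and uncertainty summing to one, resolved through a base rate). The paper's actual formalism and the point where opinions enter the learning loop may differ; `Opinion` and `advised_action` are illustrative names, not the authors' API.

```python
import random
from dataclasses import dataclass


@dataclass
class Opinion:
    """A subjective-logic-style opinion about one action being advisable.

    belief + disbelief + uncertainty must sum to 1; base_rate is the
    prior probability used to resolve the uncertainty mass.
    """
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def projected_probability(self) -> float:
        # Standard projection: the uncertain mass is apportioned by the
        # base rate, so a fully uncertain opinion collapses to the prior.
        return self.belief + self.base_rate * self.uncertainty


def advised_action(q_values, advice, epsilon=0.1):
    """Epsilon-greedy selection whose exploration step is biased toward
    actions the advisor endorses with high projected probability."""
    actions = range(len(q_values))
    if random.random() < epsilon:
        weights = [advice[a].projected_probability() for a in actions]
        return random.choices(actions, weights=weights, k=1)[0]
    return max(actions, key=lambda a: q_values[a])


if __name__ == "__main__":
    q = [0.2, 0.0, 0.1]
    # Hypothetical advisor: fairly sure action 1 is good, unsure about action 2.
    advice = {
        0: Opinion(belief=0.1, disbelief=0.6, uncertainty=0.3),
        1: Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2),
        2: Opinion(belief=0.2, disbelief=0.2, uncertainty=0.6),
    }
    print(advised_action(q, advice, epsilon=1.0))  # exploration skewed toward 1
```

Note the graceful degradation this projection gives: a fully uncertain opinion (belief = disbelief = 0, uncertainty = 1) projects to the base rate, so an agent advised only by ignorant opinions falls back to ordinary uniform exploration.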