Monitored Markov Decision Processes
CoRR (2024)
Abstract
In reinforcement learning (RL), an agent learns to perform a task by
interacting with an environment and receiving feedback (a numerical reward) for
its actions. However, the assumption that rewards are always observable is
often not applicable in real-world problems. For example, the agent may need to
ask a human to supervise its actions or activate a monitoring system to receive
feedback. There may even be a period of time before rewards become observable,
or a period of time after which rewards are no longer given. In other words,
there are cases where the environment generates rewards in response to the
agent's actions but the agent cannot observe them. In this paper, we formalize
a novel but general RL framework - Monitored MDPs - where the agent cannot
always observe rewards. We discuss the theoretical and practical consequences
of this setting, show challenges raised even in toy environments, and propose
algorithms to begin to tackle this novel setting. This paper introduces a
powerful new formalism that encompasses both new and existing problems and lays
the foundation for future research.
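The core idea of the abstract — the environment still generates rewards, but the agent does not always observe them — can be illustrated with a minimal sketch. This is not the paper's formal definition; the wrapper class, the `p_observe` parameter, and the per-step Bernoulli monitoring model are illustrative assumptions.

```python
import random

class MonitoredRewardWrapper:
    """Sketch of a monitored-reward setting: the environment's true
    reward exists every step, but the agent only observes it when the
    (hypothetical) monitor is active, modeled here as a coin flip with
    probability `p_observe`."""

    def __init__(self, p_observe=0.5, seed=42):
        self.p_observe = p_observe
        self.rng = random.Random(seed)

    def observed_reward(self, true_reward):
        # Monitoring active: the agent sees the true reward.
        if self.rng.random() < self.p_observe:
            return true_reward
        # Monitoring inactive: the reward is generated but hidden.
        return None

wrapper = MonitoredRewardWrapper(p_observe=0.5, seed=42)
observed = [wrapper.observed_reward(1.0) for _ in range(6)]
# Each entry is either 1.0 (observed) or None (generated but unobserved).
```

A learning algorithm in this setting must decide how to update its value estimates on the `None` steps, which is one of the challenges the paper raises.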