SDAC: Efficient Safe Reinforcement Learning with Low-Biased Distributional Actor-Critic

ICLR 2023

Abstract
To apply reinforcement learning (RL) to real-world applications, agents must adhere to the safety guidelines of their respective domains. Safe RL handles such guidelines by maximizing returns while keeping safety constraints satisfied. In this paper, we develop a safe distributional RL method based on the trust region method, which can satisfy safety constraints consistently. However, the importance sampling required by the trust region method can hinder performance due to its large variance, and policies may violate the safety guidelines because of the estimation bias of distributional critics. We therefore improve safety performance through the following approaches. First, we propose novel surrogates for the trust region method, expressed with Q-functions via the reparameterization trick. Second, we utilize distributional critics trained with a target distribution in which bias and variance can be traded off. In addition, if the initial policy violates the safety constraints, there may be no policy satisfying them within the trust region. We therefore propose a gradient integration method that is guaranteed to find a policy satisfying multiple constraints starting from an unsafe initial policy. Extensive experiments show that the proposed method achieves minimal constraint violations while attaining high returns compared to existing safe RL methods. Furthermore, we demonstrate the benefit of safe RL for problems in which the reward function cannot be easily specified.
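The abstract mentions a gradient integration method for recovering a feasible policy from an unsafe initial policy under multiple constraints. As a rough illustration of that general idea (not the authors' exact algorithm), the hypothetical sketch below linearizes each violated constraint and computes a minimum-norm parameter step that reduces all violations at once; the function name integrated_constraint_step, the trust-region-style step clipping, and the toy numbers are assumptions made for demonstration only.

# Hypothetical sketch of a "gradient integration"-style update: given the
# gradients of several violated constraint costs, compute one parameter step
# that pushes all of them down simultaneously. Simplifying assumptions:
# linearized constraints and a minimum-norm least-squares solve.
import numpy as np

def integrated_constraint_step(grads, violations, max_step=0.1):
    """grads: (k, d) array, row i = gradient of constraint cost i w.r.t. params.
    violations: (k,) array of current positive constraint violations.
    Returns a parameter step intended to remove all violations (to first order)."""
    G = np.asarray(grads, dtype=float)        # (k, d) stacked constraint gradients
    c = np.asarray(violations, dtype=float)   # (k,) amounts each cost must decrease
    # Minimum-norm step solving the linearized system G @ step = -c,
    # i.e. each constraint cost is reduced by its current violation.
    step = -G.T @ np.linalg.solve(G @ G.T + 1e-8 * np.eye(len(c)), c)
    # Trust-region-style clipping so the linearization remains reasonable.
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm
    return step

# Usage: two violated constraints over a toy 4-dimensional parameter vector.
grads = np.array([[1.0, 0.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0, 0.5]])
violations = np.array([0.2, 0.1])
params = np.zeros(4)
params += integrated_constraint_step(grads, violations)
print(params)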
Keywords
Reinforcement learning, Safety, Distributional Critic