Successive Convex Approximation Based Off-Policy Optimization for Constrained Reinforcement Learning

IEEE TRANSACTIONS ON SIGNAL PROCESSING (2022)

Abstract
Constrained reinforcement learning (CRL), also termed safe reinforcement learning, is a promising technique for enabling the deployment of RL agents in real-world systems. In this paper, we propose a successive convex approximation based off-policy optimization (SCAOPO) algorithm to solve the general CRL problem, which is formulated as a constrained Markov decision process (CMDP) in the context of the average cost. The SCAOPO is based on solving a sequence of convex objective/feasibility optimization problems obtained by replacing the objective and constraint functions in the original problem with convex surrogate functions. The proposed SCAOPO enables the reuse of experiences from previous updates, thereby significantly reducing the implementation cost when deployed in real-world engineering systems that need to learn the environment online. In spite of the time-varying state distribution and the stochastic bias incurred by off-policy learning, the SCAOPO with a feasible initial point can still provably converge to a Karush-Kuhn-Tucker (KKT) point of the original problem almost surely.
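To make the iteration structure concrete, below is a minimal sketch (not the paper's implementation) of one successive-convex-approximation step for a generic constrained problem min_theta f0(theta) s.t. fk(theta) <= 0. Convex quadratic surrogates are built around the current iterate from estimated function values and gradients (in SCAOPO such estimates would be formed from reused off-policy experiences), and an objective step is taken when the surrogate constraints hold at the current point, a feasibility step otherwise. The surrogate form, the proximal weight tau, the step size gamma_t, and all names are illustrative assumptions rather than the paper's notation.

```python
# Minimal sketch of one successive convex approximation (SCA) step for
#   min_theta  f0(theta)   s.t.   fk(theta) <= 0,  k = 1..K.
# The quadratic surrogate form, the proximal weight tau, the step size
# gamma_t, and all names below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize


def make_surrogate(value_hat, grad_hat, theta_t, tau=1.0):
    """Convex quadratic surrogate around theta_t:
    value_hat + grad_hat^T (x - theta_t) + tau * ||x - theta_t||^2."""
    def surrogate(x):
        d = x - theta_t
        return value_hat + grad_hat @ d + tau * (d @ d)
    return surrogate


def sca_step(theta_t, obj_estimate, con_estimates, gamma_t, tau=1.0):
    """One SCA iteration: build surrogates from (value, gradient) estimates,
    solve a convex objective or feasibility subproblem, and update theta by a
    convex combination of the old iterate and the subproblem solution."""
    obj_sur = make_surrogate(*obj_estimate, theta_t, tau)
    con_surs = [make_surrogate(v, g, theta_t, tau) for v, g in con_estimates]

    if all(c(theta_t) <= 0.0 for c in con_surs):
        # Objective step: minimize the surrogate objective subject to the
        # convex surrogate constraints (SLSQP treats 'ineq' as fun(x) >= 0).
        res = minimize(obj_sur, theta_t, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": (lambda x, c=c: -c(x))}
                                    for c in con_surs])
        theta_bar = res.x
    else:
        # Feasibility step: minimize the largest surrogate constraint value,
        # written in epigraph form with an extra slack variable s = z[-1].
        z0 = np.append(theta_t, max(c(theta_t) for c in con_surs))
        res = minimize(lambda z: z[-1], z0, method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": (lambda z, c=c: z[-1] - c(z[:-1]))}
                                    for c in con_surs])
        theta_bar = res.x[:-1]

    # Smoothed update toward the subproblem solution.
    return (1.0 - gamma_t) * theta_t + gamma_t * theta_bar
```

A caller would pass fresh (value, gradient) estimates at each iteration and let gamma_t decay over time; the off-policy estimation of these quantities from reused experiences, and the conditions under which the iterates reach a KKT point, are the subject of the paper and are not reproduced in this sketch.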
Keywords
Convergence, Costs, Signal processing algorithms, Optimization, Approximation algorithms, Reinforcement learning, Markov processes, Constrained, Safe reinforcement learning, off-policy, theoretical convergence