An Efficient Evaluation Mechanism for Evolutionary Reinforcement Learning

Intelligent Computing Theories and Application (ICIC 2022), Part I, 2022

Abstract
In recent years, many algorithms have used Evolutionary Algorithms (EAs) to help Reinforcement Learning (RL) escape local optima. Evolutionary Reinforcement Learning (ERL) is a popular algorithm in this field. However, ERL evaluates the population in every loop of the algorithm, which is inefficient because of the uncertainty in the population's experience. In this paper, we propose a novel evaluation mechanism that evaluates the population only when the RL agent has difficulty improving further. This mechanism improves the efficiency of the hybrid algorithm in most cases, and even in the worst scenario it reduces performance only marginally. We embed this mechanism into ERL, denoted as E-ERL, and compare it with the original ERL and other state-of-the-art RL algorithms. Results on six continuous control problems validate the efficiency of our method.
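
As a rough illustration of the mechanism summarized above, the sketch below tracks a rolling window of the RL agent's recent episode returns and flags the population for evaluation only once those returns stop improving. The class name, window size, and stagnation threshold are illustrative assumptions, not the paper's actual trigger criterion.

from collections import deque

class StagnationTrigger:
    """Decide when to run the (costly) population evaluation:
    only when the RL agent's recent returns have stopped improving.
    Hypothetical sketch; window and eps are assumed hyper-parameters."""

    def __init__(self, window=10, eps=1e-3):
        self.returns = deque(maxlen=window)  # rolling window of episode returns
        self.eps = eps                       # improvement threshold

    def update(self, episode_return):
        # Record the latest episode return from the RL agent.
        self.returns.append(episode_return)

    def should_evaluate_population(self):
        # Wait until the window is full, then report stagnation if the
        # spread of recent returns is below the threshold.
        if len(self.returns) < self.returns.maxlen:
            return False
        return max(self.returns) - min(self.returns) < self.eps

In a hybrid loop such as ERL's, the RL agent would call update() after each episode, and the expensive population rollouts would run only when should_evaluate_population() returns True, instead of in every iteration.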
Keywords
Evolutionary algorithm, Reinforcement learning, Evolutionary reinforcement learning, Evaluation mechanism