Learning to select the recombination operator for derivative-free optimization

Science China Mathematics (2024)

Abstract
Extensive studies on selecting recombination operators adaptively during the search process of an evolutionary algorithm (EA), known as adaptive operator selection (AOS), have shown that AOS is promising for improving an EA's performance. A variety of heuristic mechanisms for AOS have been proposed in recent decades, usually comprising two main components: feature extraction and policy setting. Feature extraction refers to extracting relevant features from the information collected during the search process. Policy setting means devising a strategy (or policy) for selecting an operator from a pool of operators based on the extracted features. Both components are designed by hand in existing studies, which may not adapt efficiently to optimization problems. In this paper, a generalized framework is proposed for learning the components of AOS for one of the mainstream EAs, namely, differential evolution (DE). In the framework, the feature extraction is parameterized as a deep neural network (DNN), while a Dirichlet distribution serves as the policy. A reinforcement learning method, namely policy gradient, is used to train the DNN. As case studies, the proposed framework is applied to two DEs, the classic DE and a recently proposed DE, resulting in two new algorithms named PG-DE and PG-MPEDE, respectively. Experiments on the Congress on Evolutionary Computation (CEC) 2018 test suite show that the proposed new algorithms perform significantly better than their counterparts. Finally, we prove theoretically that the classic methods considered are special cases of the proposed framework.
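To make the framework's architecture concrete, below is a minimal sketch (not the authors' implementation) of the idea described in the abstract: a neural network maps search-state features to the concentration parameters of a Dirichlet distribution over recombination operators, and the network is trained with a REINFORCE-style policy gradient. The feature dimension, the size of the operator pool, and the reward signal are all illustrative assumptions.

```python
# Sketch of a learned AOS policy: DNN feature extractor -> Dirichlet policy
# -> policy-gradient update. Sizes and reward are placeholders.
import torch
import torch.nn as nn

class OperatorSelectionPolicy(nn.Module):
    def __init__(self, n_features: int, n_operators: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_operators),
            nn.Softplus(),  # Dirichlet concentrations must be positive
        )

    def forward(self, features: torch.Tensor) -> torch.distributions.Dirichlet:
        # Small offset keeps concentrations bounded away from zero.
        alpha = self.net(features) + 1e-3
        return torch.distributions.Dirichlet(alpha)

# Hypothetical usage inside one DE generation:
n_features, n_operators = 8, 4                 # assumed sizes
policy = OperatorSelectionPolicy(n_features, n_operators)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

features = torch.randn(n_features)             # stand-in for extracted search features
dist = policy(features)
probs = dist.sample()                          # operator-selection probabilities
operator = torch.multinomial(probs, 1).item()  # operator to apply this generation

# ... apply the chosen DE recombination operator, then derive a reward,
# e.g., from the resulting fitness improvement (placeholder below) ...
reward = torch.tensor(0.5)

# REINFORCE update: raise the log-probability of sampled probability
# vectors that led to higher reward.
loss = -dist.log_prob(probs) * reward
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch the Dirichlet samples a probability vector over the operator pool, matching the abstract's choice of policy distribution; how the reward is computed from search progress is left open, since the abstract does not specify it.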
Keywords
evolutionary algorithm, differential evolution, adaptive operator selection, reinforcement learning, deep learning, 68T05, 68W01, 90C40