How The Duration Of The Learning Period Affects The Performance Of Random Gradient Selection Hyper-Heuristics

The Thirty-Fourth AAAI Conference on Artificial Intelligence, the Thirty-Second Innovative Applications of Artificial Intelligence Conference, and the Tenth AAAI Symposium on Educational Advances in Artificial Intelligence (2020)

Abstract
Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLS_k) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS_1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black-box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
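To illustrate the mechanism the abstract describes, the following is a minimal Python sketch of a random gradient selection hyper-heuristic with a fixed learning period τ, using RLS_k low-level heuristics on OneMax. It is not the authors' implementation; the acceptance rule (accepting equal-fitness moves), the neighbourhood sizes, and all parameter names are illustrative assumptions.

```python
import random

def onemax(x):
    """OneMax fitness: number of 1-bits in the bit string."""
    return sum(x)

def rls_k(x, k):
    """RLS_k mutation: flip exactly k distinct, uniformly chosen bit positions."""
    y = x[:]
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def random_gradient_hh(n, ks=(1, 2), tau=50, fitness=onemax, max_evals=10**6):
    """Random gradient selection hyper-heuristic (illustrative sketch).

    A low-level heuristic RLS_k is chosen uniformly at random and run for a
    learning period of tau evaluations; if it produces at least one improvement
    within the period it is kept for another period, otherwise a new heuristic
    is drawn uniformly at random.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    evals = 1
    k = random.choice(ks)                # initial low-level heuristic
    while fx < n and evals < max_evals:
        improved = False
        for _ in range(tau):             # one learning period with the current heuristic
            y = rls_k(x, k)
            fy = fitness(y)
            evals += 1
            if fy > fx:                  # strict improvement: retain the heuristic
                x, fx = y, fy
                improved = True
            elif fy == fx:               # accept equal fitness (assumed RLS-style rule)
                x = y
            if fx == n or evals >= max_evals:
                break
        if not improved:                 # no improvement in the period: re-select at random
            k = random.choice(ks)
    return x, fx, evals

if __name__ == "__main__":
    # Example run on a 100-bit OneMax instance with a short static learning period.
    best, best_fitness, used_evals = random_gradient_hh(100, tau=20)
    print(best_fitness, used_evals)
```

The adaptive variant analysed in the paper additionally adjusts τ during the run rather than keeping it static; the sketch above only covers the static-τ case.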