A running performance metric and termination criterion for evaluating evolutionary multi- and many-objective optimization algorithms

CEC (2020)

Abstract
Researchers have spent considerable effort on evaluating the goodness of a solution set obtained by an evolutionary multi-objective algorithm. However, most performance metrics assume that the exact Pareto-optimal set is known. Moreover, most metrics evaluate an algorithm based only on its final solution set, which fails to capture its performance during intermediate generations. In this paper, we investigate a running performance metric that can be applied at any time during an algorithm's execution and does not require knowledge of the true optimum. In general, multi-objective algorithms improve either convergence, based on the dominance relation, or diversity in the solution set. Our proposed running metric exploits this fact by keeping track of indicators of the extreme points and the non-dominated (ND) solution set in each generation and deriving measures of convergence and diversity from them. Moreover, by introducing a threshold and comparing indicator values, a set of termination criteria is also suggested. Finally, we demonstrate how our running performance metric can be used to compare multiple evolutionary multi-objective algorithms with each other. An implementation of the proposed methodology is available in pymoo, a multi-objective optimization framework: https://pymoo.org.
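The paper's reference implementation ships with pymoo (https://pymoo.org). As a rough illustration of the idea only, and not the authors' exact formulation or pymoo's API, the following Python sketch tracks the ND set each generation, normalizes it by the extreme points observed so far, and uses the change between consecutive normalized ND sets (measured with an IGD-style distance) as a convergence indicator and threshold-based termination criterion; all names here (RunningMetricSketch, tol, window) are hypothetical:

import numpy as np


def nondominated(F):
    # Keep only objective vectors not dominated by any other (minimization).
    F = np.asarray(F, dtype=float)
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return F[keep]


def igd(reference, F):
    # Average distance from each reference point to its nearest point in F.
    d = np.sqrt(((reference[:, None, :] - F[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).mean()


class RunningMetricSketch:
    # Tracks the ND set each generation and reports how much it moved,
    # measured in objective space normalized by the extreme points observed
    # so far. Small movement over a window of generations suggests termination.

    def __init__(self, tol=1e-3, window=5):
        self.tol = tol
        self.window = window
        self.nd_sets = []   # one ND set per generation
        self.deltas = []    # per-generation change indicator

    def update(self, F):
        nd = nondominated(F)
        if self.nd_sets:
            prev = self.nd_sets[-1]
            both = np.vstack([prev, nd])
            ideal = both.min(axis=0)                              # ideal-point estimate
            scale = np.maximum(both.max(axis=0) - ideal, 1e-12)   # nadir-point estimate
            delta = igd((nd - ideal) / scale, (prev - ideal) / scale)
            self.deltas.append(delta)
        self.nd_sets.append(nd)
        return self.deltas[-1] if self.deltas else float("inf")

    def should_terminate(self):
        recent = self.deltas[-self.window:]
        return len(recent) == self.window and max(recent) < self.tol

In a typical loop, update(F) would be called with the population's objective values after every generation and the run stopped once should_terminate() returns True; the recorded per-generation deltas can also be plotted over time to compare several algorithms, as the paper does.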
Keywords
Multi-objective Optimization, Performance Indicator, Running Metric