Multicriteria Classifier Ensemble Learning for Imbalanced Data

IEEE Access (2022)

Citations: 7 | Views: 5
Abstract
One of the vital problems in training classifiers on imbalanced data is the definition of the optimization criterion. Typically, since the exact cost of misclassifying the individual classes is unknown, combined metrics and loss functions that roughly balance the cost for each class are used. However, this approach can lose information, since different trade-offs between class misclassification rates can produce similar values of the combined metric. To address this issue, this paper discusses a multi-criteria ensemble training method for imbalanced data. The proposed method jointly optimizes precision and recall and provides the end user with a set of Pareto-optimal solutions, from which the final one can be chosen according to the user's preference. The approach was evaluated on a number of benchmark datasets and compared with the single-criterion approach (where the selected criterion was one of the chosen metrics). The experimental results confirm the usefulness of the method: on the one hand, it guarantees quality no worse than that obtained with single-criterion optimization; on the other hand, it lets the user choose the solution that best meets their expectations regarding the trade-off between errors on the minority and the majority class.
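The core idea summarized above, evaluating candidate ensembles on two criteria (precision and recall) and presenting only the Pareto-optimal ones to the user, can be illustrated with a short sketch. The Python snippet below is not the authors' algorithm: generating candidates as bagging ensembles of varying size and random seed, and scoring them on a held-out validation split, are assumptions made purely for illustration of the Pareto-filtering step.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Imbalanced toy problem (roughly 10% minority class).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

# Build a pool of candidate ensembles; here the candidates simply differ in
# ensemble size and random seed (an illustrative assumption, not the paper's
# candidate-generation procedure).
candidates = []
for n_estimators in (5, 10, 20, 40):
    for seed in range(3):
        clf = BaggingClassifier(n_estimators=n_estimators, random_state=seed)
        clf.fit(X_tr, y_tr)
        y_pred = clf.predict(X_val)
        candidates.append((
            precision_score(y_val, y_pred, zero_division=0),
            recall_score(y_val, y_pred),
            clf,
        ))

def is_dominated(c, pool):
    """c is dominated if some candidate is at least as good on both precision
    and recall and strictly better on at least one of them."""
    return any(o[0] >= c[0] and o[1] >= c[1] and (o[0] > c[0] or o[1] > c[1])
               for o in pool)

# Pareto front: the non-dominated (precision, recall) trade-offs, from which
# the end user picks the final model according to their preference.
pareto = [c for c in candidates if not is_dominated(c, candidates)]
for prec, rec, _ in sorted(pareto, key=lambda c: c[0]):
    print(f"precision={prec:.3f}  recall={rec:.3f}")
```

In the paper the candidate generation is driven by a multi-objective optimizer rather than this ad hoc pool; the non-dominance filter and the user-side choice from the resulting Pareto set are the parts that carry over directly.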
Keywords
Measurement, Optimization, Costs, Task analysis, Bagging, Training, Licenses, Classifier ensemble, imbalanced data, multi-objective optimization, pattern classification