Parametric methods for comparing the performance of two classification algorithms evaluated by k-fold cross validation on multiple data sets.

Pattern Recognition (2017)

Cited 60 | Viewed 10
Abstract
A popular procedure for determining which of two classification algorithms performs better is to test them on multiple data sets and aggregate the accuracies resulting from k-fold cross validation to draw a conclusion. Several nonparametric methods have been proposed for this purpose, but parametric methods are a better choice when the assumptions for deriving sampling distributions are satisfied. In this paper, we treat each accuracy estimate obtained from the instances in a fold or a data set as a point estimator rather than a fixed value, and derive the sampling distribution of this estimator for comparing the performance of two classification algorithms. Test statistics are proposed for both the data-set and fold averaging levels, together with the ways to calculate their degrees of freedom. Experiments on twelve data sets demonstrate that our parametric methods can effectively compare the performance of two classification algorithms on multiple data sets. Several critical issues in using our parametric methods and the nonparametric ones proposed in a previous study are then discussed.

Highlights
Parametric methods for performance comparison of two classification algorithms are proposed.
Both independent and matched samples are considered.
The experimental results on twelve data sets demonstrate the effectiveness of our parametric methods.
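The paper derives its own test statistics and degree-of-freedom calculations, which are not reproduced in this abstract. As a rough illustration of what a fold-level parametric comparison looks like, the sketch below computes the classic paired t-statistic over per-fold accuracies of two classifiers evaluated on the same folds; the function name and sample values are illustrative, not taken from the paper.

```python
import math

def paired_fold_t_statistic(acc_a, acc_b):
    """Paired t-statistic over per-fold accuracies of two classifiers.

    acc_a, acc_b: lists of k accuracies from the same k folds.
    Returns (t, dof) with dof = k - 1. This is the standard paired
    t-test, shown only to illustrate the general shape of fold-level
    parametric comparison; the paper's statistics differ.
    """
    k = len(acc_a)
    diffs = [a - b for a, b in zip(acc_a, acc_b)]
    mean_d = sum(diffs) / k
    # Sample variance of the differences (k - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (k - 1)
    t = mean_d / math.sqrt(var_d / k)
    return t, k - 1
```

For example, with five folds of hypothetical accuracies, `paired_fold_t_statistic([0.90, 0.88, 0.92, 0.91, 0.89], [0.85, 0.86, 0.87, 0.88, 0.84])` yields a t-value with 4 degrees of freedom, which would then be compared against the t-distribution's critical value at the chosen significance level.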
Keywords
Classification, k-fold cross validation, Parametric method, Sampling distribution