Bias Reduction Through Conditional Conformal Prediction
Intelligent Data Analysis (2015)
Abstract
Conformal prediction (CP) is a relatively new framework in which predictive models output sets of predictions with a bound on the error rate, i.e., the probability of making an erroneous prediction is guaranteed to be equal to or less than a predefined significance level. Label-conditional conformal prediction (LCCP) is a specialization of the framework which gives a bound on the error rate for each individual class. For datasets with class imbalance, many learning algorithms have a tendency to predict the majority class more often than the expected relative frequency, i.e., they are biased in favor of the majority class. In this study, the class bias of standard and label-conditional conformal predictors is investigated. An empirical investigation on 32 publicly available datasets with varying degrees of class imbalance is presented. The experimental results show that CP is highly biased towards the majority class on imbalanced datasets, i.e., it can be expected to make a majority of its errors on the minority class. LCCP, on the other hand, is not biased towards the majority class. Instead, the errors are distributed between the classes almost in accordance with the prior class distribution.
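The two procedures contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a split-conformal setup in which an underlying model has already produced class-probability estimates, and it uses the common nonconformity score 1 minus the probability of the candidate class (the abstract does not specify the paper's scoring function). The only difference between standard CP and label-conditional CP is which calibration examples each candidate label is compared against.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha,
                   label_conditional=False):
    """Prediction sets at significance level alpha.

    cal_probs:  (n_cal, n_classes) class probabilities on the calibration set
    cal_labels: (n_cal,) true calibration labels
    test_probs: (n_test, n_classes) class probabilities on the test set
    Nonconformity score: 1 - probability assigned to the (candidate) class.
    """
    # Calibration nonconformity scores, computed at the true labels.
    cal_nc = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    sets = []
    for probs in test_probs:
        region = []
        for y in range(test_probs.shape[1]):
            # LCCP: compare only against calibration examples of class y,
            # which is what yields the per-class error-rate guarantee.
            nc = cal_nc[cal_labels == y] if label_conditional else cal_nc
            test_nc = 1.0 - probs[y]
            p_value = (np.sum(nc >= test_nc) + 1) / (len(nc) + 1)
            if p_value > alpha:  # keep y unless it can be rejected
                region.append(y)
        sets.append(region)
    return sets
```

On an imbalanced calibration set, the label-conditional variant tends to produce larger prediction sets for the minority class (its p-values are computed from fewer, class-specific scores), which is how it trades efficiency for the per-class validity discussed above.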
Keywords
Conformal prediction, imbalanced learning, class bias