Estimating Random-X Prediction Error of Regression Models

arXiv: Methodology (2017)

Cited 23 | Views 102
Abstract
The areas of model selection and model evaluation for predictive modeling have received extensive treatment in the statistics literature, leading to both theoretical advances and practical methods based on covariance penalties and other approaches. However, the majority of this work, and especially the practical approaches, is based on the Fixed-X assumption, where covariate values are assumed to be non-random and known. By contrast, in most modern predictive modeling applications, it is more reasonable to take the Random-X view, in which the covariate values of future prediction points are random and new. In this paper we concentrate on examining the applicability of covariance-penalty approaches to this problem. We propose a decomposition of the Random-X prediction error that clarifies the additional error due to Random-X, which is present in both the variance and bias components of the error. This decomposition is general, but we focus on its application to the fundamental case of least squares regression. We show how to quantify the excess variance under some assumptions using standard random-matrix results, leading to a covariance penalty approach we term $RCp$. When the variance of the error is unknown, using the standard unbiased estimate leads to an approach we term $\widehat{RCp}$, which is closely related to the existing methods MSEP and GCV. To account for excess bias, we propose to take only the bias component of the ordinary cross-validation (OCV) estimate, resulting in a hybrid penalty we term $RCp^+$. We demonstrate by theoretical analysis and simulations that this approach is consistently superior to OCV, although the difference is typically small.
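The following is a minimal numerical sketch, not the paper's exact construction, of how a Random-X covariance penalty for least squares can be computed alongside the Fixed-X Cp penalty and ordinary cross-validation. It assumes an i.i.d. Gaussian design, for which the standard identity $E[x_0'(X'X)^{-1}x_0] = p/(n-p-1)$ gives the excess prediction variance; the function name and the precise penalty form below are illustrative assumptions. Plugging in the unbiased residual estimate of $\sigma^2$ (as done when `sigma2` is not supplied) corresponds to the $\widehat{RCp}$-style variant the abstract relates to MSEP and GCV.

```python
import numpy as np

def ols_error_estimates(X, y, sigma2=None):
    """Illustrative Fixed-X and Random-X error estimates for OLS.

    A sketch under a Gaussian-design assumption; not the paper's exact formulas.
    Requires n > p + 1.
    """
    n, p = X.shape
    # OLS fit via the hat matrix H = X (X'X)^{-1} X'
    XtX_inv = np.linalg.inv(X.T @ X)
    H = X @ XtX_inv @ X.T
    y_hat = H @ y
    resid = y - y_hat
    err_train = np.mean(resid ** 2)  # training (apparent) error

    # Unbiased plug-in noise variance if sigma^2 is unknown: RSS / (n - p)
    s2 = np.sum(resid ** 2) / (n - p) if sigma2 is None else sigma2

    # Fixed-X covariance penalty (Mallows' Cp form): err + 2 sigma^2 p / n
    cp = err_train + 2 * s2 * p / n

    # Random-X penalty sketch: excess variance p/(n-p-1) from the
    # Gaussian-design identity E[x0'(X'X)^{-1} x0] = p / (n - p - 1)
    rcp = err_train + s2 * (p / n + p / (n - p - 1))

    # Ordinary (leave-one-out) cross-validation, exact for OLS via leverages
    h = np.diag(H)
    ocv = np.mean((resid / (1 - h)) ** 2)

    return {"err_train": err_train, "Cp": cp, "RCp": rcp, "OCV": ocv}

# Toy usage: Gaussian design, linear signal plus noise
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)
print(ols_error_estimates(X, y))
```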