On the use of information criteria for subset selection in least squares regression

arXiv (2019)

Abstract
Least squares (LS) based subset selection methods are popular in linear regression modeling when the number of predictors is less than the number of observations. Best subset selection (BS) is known to be NP-hard and has a computational cost that grows exponentially with the number of predictors. Forward stepwise selection (FS) is a greedy heuristic for BS. Both methods rely on cross-validation (CV) to select the subset size $k$, which requires fitting the procedures multiple times and results in a selected $k$ that is random across replications. Compared to CV, information criteria only require fitting the procedures once, and we show that for LS-based methods they can result in better predictive performance while providing a non-random choice of $k$. However, information criteria require knowledge of the effective degrees of freedom of the fitting procedure, which is generally not available analytically for complex methods. In this paper, we propose a novel LS-based method, best orthogonalized subset selection (BOSS), which performs BS upon an orthogonalized basis of ordered predictors. Assuming orthogonal predictors, we build a connection between BS and its Lagrangian formulation (i.e., minimization of the residual sum of squares plus the product of a regularization parameter and $k$), and based on this connection introduce a heuristic degrees of freedom (hdf) for BOSS that can be estimated via an analytically based expression. We show in both simulations and real data analysis that BOSS using the Kullback-Leibler based information criterion AICc-hdf has the strongest performance of all of the LS-based methods considered and is competitive with regularization methods, with the computational effort of a single ordinary LS fit.
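To make the abstract's verbal description concrete, the constrained best subset problem, its Lagrangian formulation, and one common Gaussian-likelihood form of the corrected AIC with a heuristic degrees of freedom (hdf) in place of the usual effective degrees of freedom can be sketched as follows. This is an illustrative rendering using standard notation; the paper's exact expressions, in particular the analytical estimator of hdf, are developed in the paper itself and may differ in detail.

```latex
% Best subset selection with subset size k (left) and its Lagrangian
% formulation (right): residual sum of squares plus a regularization
% parameter times the subset size. Illustrative notation, not copied
% from the paper.
\[
\min_{\beta \in \mathbb{R}^p} \, \lVert y - X\beta \rVert_2^2
  \ \ \text{subject to}\ \ \lVert \beta \rVert_0 \le k
\qquad \text{and} \qquad
\min_{\beta \in \mathbb{R}^p} \, \lVert y - X\beta \rVert_2^2
  + \lambda \lVert \beta \rVert_0 .
\]

% One common Gaussian-likelihood form of the corrected AIC, with the
% heuristic degrees of freedom (hdf) substituted for the usual
% effective degrees of freedom:
\[
\text{AICc-hdf}
  = n \log\!\left(\frac{\mathrm{RSS}}{n}\right)
  + n \, \frac{n + \mathrm{hdf}}{n - \mathrm{hdf} - 2} .
\]
```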
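As a complementary illustration of the "information criterion instead of CV" point, the sketch below runs forward stepwise selection on simulated data and picks the subset size $k$ by minimizing an AICc-type score along the single fitted path. It is not the paper's BOSS procedure: it uses the naive degrees of freedom (number of selected predictors plus one for the intercept) rather than the paper's hdf, and the data-generating setup is an assumption made only for the demonstration.

```python
# Illustrative sketch only (not the paper's BOSS procedure): forward stepwise
# selection with the subset size k chosen by an AICc-type criterion instead of
# cross-validation. The naive degrees of freedom used here (number of selected
# predictors plus one for the intercept) is NOT the heuristic degrees of
# freedom (hdf) developed in the paper; the point is the single-pass,
# non-random choice of k that an information criterion provides.
import numpy as np

rng = np.random.default_rng(0)
n, p, k_true = 100, 20, 5

# Simulated sparse linear model (an assumed setup, not data from the paper).
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k_true] = 2.0
y = X @ beta + rng.standard_normal(n)

def rss_of_subset(cols):
    """Residual sum of squares of the LS fit on the given columns plus an intercept."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return float(resid @ resid)

def aicc(rss, df):
    """One common Gaussian-likelihood form of AICc; assumes n - df - 2 > 0."""
    return n * np.log(rss / n) + n * (n + df) / (n - df - 2)

# Forward stepwise: greedily add the predictor that most reduces the RSS,
# recording the whole path of nested subsets (sizes 0 through p).
active, remaining = [], list(range(p))
path = [(list(active), rss_of_subset(active))]
while remaining:
    best_j = min(remaining, key=lambda j: rss_of_subset(active + [j]))
    active.append(best_j)
    remaining.remove(best_j)
    path.append((list(active), rss_of_subset(active)))

# Choose k by minimizing AICc along the single fitted path, in contrast to
# refitting the whole procedure across CV folds.
scores = [aicc(rss, df=len(cols) + 1) for cols, rss in path]
k_hat = int(np.argmin(scores))
print("selected subset size:", k_hat, "selected columns:", sorted(path[k_hat][0]))
```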