Uniform Asymptotic Inference And The Bootstrap After Model Selection

Annals of Statistics (2018)

Abstract
Recently, Tibshirani et al. [J. Amer. Statist. Assoc. 111 (2016) 600-620] proposed a method for making inferences about parameters defined by model selection, in a typical regression setting with normally distributed errors. Here, we study the large sample properties of this method, without assuming normality. We prove that the test statistic of Tibshirani et al. (2016) is asymptotically valid, as the number of samples n grows and the dimension d of the regression problem stays fixed. Our asymptotic result holds uniformly over a wide class of nonnormal error distributions. We also propose an efficient bootstrap version of this test that is provably (asymptotically) conservative, and in practice, often delivers shorter intervals than those from the original normality-based approach. Finally, we prove that the test statistic of Tibshirani et al. (2016) does not enjoy uniform validity in a high-dimensional setting, when the dimension d is allowed to grow.
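To illustrate the flavor of bootstrap inference after model selection, here is a minimal sketch: one step of forward stepwise selection followed by a pairs bootstrap in which only resamples that reproduce the same selection event are retained. This is a generic illustration of conditioning on selection, not the exact conditional test of Tibshirani et al. (2016); the function names, the one-step selection rule, and the simulated data are all assumptions for demonstration.

```python
import numpy as np

def select_and_fit(X, y):
    # One step of forward stepwise: pick the predictor most
    # correlated (in inner product) with y, then fit simple OLS on it.
    j = int(np.argmax(np.abs(X.T @ y)))
    beta = float(X[:, j] @ y / (X[:, j] @ X[:, j]))
    return j, beta

def bootstrap_ci(X, y, B=500, alpha=0.1, seed=0):
    # Pairs bootstrap for the post-selection coefficient.
    # NOTE: illustrative only -- selection is redone on each resample,
    # and only draws selecting the same variable are kept, a crude way
    # of conditioning on the selection event.
    rng = np.random.default_rng(seed)
    n = len(y)
    j_hat, beta_hat = select_and_fit(X, y)
    stats = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        j_b, beta_b = select_and_fit(X[idx], y[idx])
        if j_b == j_hat:
            stats.append(beta_b)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return j_hat, beta_hat, (lo, hi)

rng = np.random.default_rng(1)
n, d = 200, 5                    # n large, d fixed, matching the regime studied
X = rng.standard_normal((n, d))
y = 2.0 * X[:, 0] + rng.standard_normal(n)
j, beta, (lo, hi) = bootstrap_ci(X, y)
print(j, round(beta, 2), (round(lo, 2), round(hi, 2)))
```

The paper's high-dimensional negative result suggests such intervals should not be trusted uniformly when d is allowed to grow with n.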
Keywords
Post-selection inference, selective inference, asymptotics, bootstrap, forward stepwise regression, lasso