Stepdown SLOPE for Controlled Feature Selection

arXiv (2023)

Abstract
Sorted L-One Penalized Estimation (SLOPE) has recently shown appealing theoretical properties and empirical performance for false discovery rate (FDR) control in high-dimensional feature selection, achieved by adaptively imposing a non-increasing sequence of tuning parameters on the sorted $\ell_1$ penalties. This paper goes beyond the previous focus on FDR control by considering stepdown-based SLOPE to control the probability of $k$ or more false rejections ($k$-FWER) and the false discovery proportion (FDP). Two new SLOPE variants, called $k$-SLOPE and F-SLOPE, are proposed to realize $k$-FWER and FDP control respectively, by injecting the stepdown procedure into the SLOPE scheme. For the proposed stepdown SLOPEs, we establish theoretical guarantees on $k$-FWER and FDP control under the orthogonal design setting, and also provide an intuitive guideline for choosing the regularization parameter sequence in a more general setting. Empirical evaluations on simulated data validate the effectiveness of our approaches for controlled feature selection and support our theoretical findings.
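The two ingredients mentioned in the abstract can be sketched generically: the sorted $\ell_1$ (SLOPE) penalty, which pairs a non-increasing tuning sequence with the coefficients sorted by absolute value, and a stepdown multiple-testing procedure, which rejects ordered hypotheses until the first failure. The sketch below is illustrative only; the critical values shown (Holm-type) are an assumption for demonstration and are not the paper's $k$-SLOPE or F-SLOPE thresholds.

```python
import numpy as np

def sorted_l1_penalty(beta, lam):
    """Sorted L1 (SLOPE) penalty: sum_i lam[i] * |beta|_(i),
    where |beta|_(1) >= ... >= |beta|_(p) and lam is non-increasing."""
    abs_desc = np.sort(np.abs(np.asarray(beta, dtype=float)))[::-1]
    return float(np.dot(lam, abs_desc))

def stepdown_rejections(pvals, alphas):
    """Generic stepdown procedure: compare the ordered p-values
    p_(1) <= p_(2) <= ... to critical values alphas[0] <= alphas[1] <= ...,
    stopping at the first failure; all earlier hypotheses are rejected."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alphas[rank]:
            reject[idx] = True
        else:
            break  # stepdown: stop at the first non-rejection
    return reject

# Example usage with Holm-type critical values alpha / (m - rank)
# (an illustrative choice, not the sequences analyzed in the paper):
pvals = np.array([0.001, 0.2, 0.03])
m, alpha = len(pvals), 0.05
alphas = [alpha / (m - rank) for rank in range(m)]
print(stepdown_rejections(pvals, alphas))   # only the smallest p-value survives
print(sorted_l1_penalty([3, -1, 2], [3, 2, 1]))
```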
Keywords
stepdown SLOPE, feature selection