Selection rules based on divergences

Statistics (2011)

Abstract
This paper deals with a special adaptive estimation problem, namely how to select, for each set of i.i.d. data X_1, …, X_n, the better of two given estimates of the data-generating probability density. Such a problem was studied by Devroye and Lugosi [Combinatorial Methods in Density Estimation, Springer, Berlin, 2001], who proposed a feasible suboptimal selection (called the Scheffé selection) as an alternative to the optimal but nonfeasible selection that minimizes the L_1-error. In many typical situations, the L_1-error of the Scheffé selection was shown to tend to zero for n → ∞ as fast as the L_1-error of the optimal estimate. This asymptotic result was based on an inequality between the total variation errors of the Scheffé and optimal selections. The present paper extends this inequality to the class of φ-divergence errors, which contains the L_1-error as a special case. The first extension compares the φ-divergence errors of the Scheffé and optimal selections of Devroye and Lugosi. The second extension deals with a class of generalized Scheffé selections adapted to the φ-divergence error criteria and reducing to the classical Scheffé selection for the L_1-criterion. It compares the φ-divergence errors of these feasible selections with those of the optimal nonfeasible selections minimizing the φ-divergence errors. Both extensions are motivated and illustrated by examples.
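
For concreteness, below is a minimal sketch of the classical Scheffé selection between two density estimates, in the spirit of Devroye and Lugosi: form the Scheffé set A = {x : f̂(x) > ĝ(x)} and pick the estimate whose probability mass on A is closer to the empirical measure of A. The function names, the grid-based integration, and the Gaussian example are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch of the classical Scheffe selection (Devroye & Lugosi, 2001).
# f_hat, g_hat, data, grid and the uniform-grid integration are assumptions
# made for illustration; the paper itself works with general measures.
import numpy as np

def scheffe_select(f_hat, g_hat, data, grid):
    """Return whichever of f_hat, g_hat has mass on the Scheffe set
    A = {x : f_hat(x) > g_hat(x)} closer to the empirical measure of A."""
    f_vals = f_hat(grid)
    g_vals = g_hat(grid)
    A = f_vals > g_vals                         # Scheffe set, evaluated on the grid
    dx = grid[1] - grid[0]                      # uniform grid spacing assumed
    mass_f = np.sum(f_vals[A]) * dx             # integral of f_hat over A
    mass_g = np.sum(g_vals[A]) * dx             # integral of g_hat over A
    mu_n = np.mean(f_hat(data) > g_hat(data))   # empirical measure of A
    return f_hat if abs(mass_f - mu_n) <= abs(mass_g - mu_n) else g_hat

if __name__ == "__main__":
    # Toy usage: data from N(0,1), candidate densities N(0,1) and N(1,1).
    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=500)
    f_hat = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    g_hat = lambda x: np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi)
    grid = np.linspace(-6.0, 6.0, 2001)
    chosen = scheffe_select(f_hat, g_hat, data, grid)
    print("selected N(0,1)" if chosen is f_hat else "selected N(1,1)")
```

The paper's generalized Scheffé selections replace the total variation comparison implicit in this rule with comparisons based on φ-divergence error criteria; the sketch above only covers the classical L_1 case.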
Keywords
nonparametric estimation, divergence error criteria, optimal and suboptimal selections, Scheffé selection, divergence selections