The power of online thinning in reducing discrepancy

Probability Theory and Related Fields (2018)

Abstract
Consider an infinite sequence of independent, uniformly chosen points from [0,1]^d. After seeing each point in the sequence, an overseer may either keep it or reject it, and this choice may depend on the locations of all previously kept points. However, the overseer must keep at least one of every two consecutive points. We call a sequence generated in this fashion a two-thinning sequence. Here, the purpose of the overseer is to control the discrepancy of the empirical distribution of points, that is, after selecting n points, to reduce the maximal deviation of the number of points inside any axis-parallel hyper-rectangle of volume A from nA. Our main result is an explicit, low-complexity two-thinning strategy which guarantees discrepancy of O(log^{2d+1} n) for all n with high probability [compare with Θ(√(n log log n)) without thinning]. The case d = 1 of this result answers a question of Benjamini. We also extend the construction to achieve the same asymptotic bound for (1+β)-thinning, a set-up in which rejecting is only allowed with probability β, independently for each point. In addition, we suggest an improved and simplified strategy which we conjecture to guarantee discrepancy of O(log^{d+1} n) [compare with Θ(log^d n), the best known construction of a low-discrepancy sequence]. Finally, we provide theoretical and empirical evidence for our conjecture, and provide simulations supporting the viability of our construction for applications.
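The two-thinning model can be illustrated with a toy simulation in dimension d = 1. The greedy rule below (of each pair of candidates, keep the one whose inclusion yields the smaller current discrepancy) is a hypothetical stand-in chosen for simplicity, not the paper's strategy; `star_discrepancy` computes the usual unnormalized one-dimensional discrepancy over intervals [0, t).

```python
import random

def star_discrepancy(points):
    """Unnormalized 1-D star discrepancy: sup over t of |#{p < t} - n*t|.
    The supremum is attained just before or just after a data point."""
    pts = sorted(points)
    n = len(pts)
    best = 0.0
    for i, p in enumerate(pts):
        best = max(best, abs(i - n * p), abs(i + 1 - n * p))
    return best

def two_thinning(n_pairs, rng):
    """Simulate the overseer: for each pair of uniform candidates, greedily
    keep the one giving the smaller discrepancy (a naive stand-in strategy;
    the overseer keeps exactly one point of every two, as the model requires)."""
    kept = []
    for _ in range(n_pairs):
        x, y = rng.random(), rng.random()
        if star_discrepancy(kept + [x]) <= star_discrepancy(kept + [y]):
            kept.append(x)
        else:
            kept.append(y)
    return kept

rng = random.Random(0)
n = 1000
thinned = two_thinning(n, rng)
plain = [rng.random() for _ in range(n)]
print(star_discrepancy(thinned), star_discrepancy(plain))
```

Even this naive greedy choice typically keeps the discrepancy far below the Θ(√(n log log n)) growth of the unthinned sequence, which is the phenomenon the paper quantifies.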
Keywords
Two-choices, Thinning, Discrepancy, Subsampling, Online, Haar