On the Detection of Mixture Distributions with applications to the Most Biased Coin Problem.

arXiv: Learning (2016)

Abstract
This paper studies the trade-off between two different kinds of pure exploration: breadth versus depth. The most biased coin problem asks how many total coin flips are required to identify a heavy coin from an infinite bag containing both heavy coins with mean $\theta_1 \in (0,1)$ and light coins with mean $\theta_0 \in (0,\theta_1)$, where heavy coins are drawn from the bag with probability $\alpha \in (0,1/2)$. The key difficulty of this problem lies in distinguishing whether the two kinds of coins have very similar means, or whether heavy coins are just extremely rare. This problem has applications in crowdsourcing, anomaly detection, and radio spectrum search. Chandrasekaran et al. (2014) recently introduced a solution to this problem, but it required perfect knowledge of $\theta_0, \theta_1, \alpha$. In contrast, we derive algorithms that are adaptive to partial or absent knowledge of the problem parameters. Moreover, our techniques generalize beyond coins to more general instances of infinitely many armed bandit problems. We also prove lower bounds that show our algorithm's upper bounds are tight up to $\log$ factors, and along the way characterize the sample complexity of differentiating between a single parametric distribution and a mixture of two such distributions. As a result, these bounds have surprising implications both for solutions to the most biased coin problem and for anomaly detection when only partial information about the parameters is known.
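To make the problem setup concrete, below is a minimal simulation sketch. It is not the adaptive algorithm from this paper: it assumes $\theta_0, \theta_1, \alpha$ are all known (closer in spirit to the known-parameter setting of Chandrasekaran et al.) and simply flips each freshly drawn coin a fixed number of times chosen via a Hoeffding bound. The function names (`flip`, `find_heavy_coin`) and the per-coin error budget are illustrative assumptions, not anything specified by the paper.

```python
import math
import random


def flip(p):
    """Simulate one flip of a coin with heads probability p; returns 1 for heads."""
    return 1 if random.random() < p else 0


def find_heavy_coin(theta_0, theta_1, alpha, delta=0.05):
    """Illustrative known-parameter baseline (NOT the paper's adaptive algorithm).

    Repeatedly draw a coin from the infinite bag, flip it n times, and keep the
    first coin whose empirical mean exceeds the midpoint (theta_0 + theta_1) / 2.
    n is set via a Hoeffding bound so each per-coin test errs with probability
    at most delta * alpha (heavy coins appear roughly once every 1/alpha draws).
    """
    gap = (theta_1 - theta_0) / 2.0
    n = math.ceil(math.log(2.0 / (delta * alpha)) / (2.0 * gap ** 2))
    threshold = (theta_0 + theta_1) / 2.0

    total_flips = 0
    while True:
        # Draw a new coin: heavy with probability alpha, light otherwise.
        is_heavy = random.random() < alpha
        p = theta_1 if is_heavy else theta_0

        heads = sum(flip(p) for _ in range(n))
        total_flips += n

        if heads / n > threshold:
            # Declared heavy; return whether the declaration was correct
            # and the total number of flips spent so far.
            return is_heavy, total_flips


if __name__ == "__main__":
    correct, flips = find_heavy_coin(theta_0=0.5, theta_1=0.6, alpha=0.1)
    print(f"declared-heavy coin actually heavy: {correct}, total flips: {flips}")
```

This naive strategy spends the same budget on every coin, which is exactly the regime the abstract highlights as wasteful: when heavy coins are merely rare (large gap, small $\alpha$), breadth is cheap and depth is wasted, whereas when the means are close (small gap), depth dominates; the paper's algorithms adapt between these regimes without knowing the parameters.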