Distribution-Dependent Weighted Union Bound

Entropy (2021)

Abstract
In this paper, we deal with the classical Statistical Learning Theory problem of bounding, with high probability, the true risk $R(h)$ of a hypothesis $h$ chosen from a set $H$ of $m$ hypotheses. The Union Bound (UB) allows one to state that $\mathbb{P}\{L(\hat{R}(h), \delta q_h) \le R(h) \le U(\hat{R}(h), \delta p_h)\} \ge 1 - \delta$, where $\hat{R}(h)$ is the empirical error, provided it is possible to prove that $\mathbb{P}\{R(h) \ge L(\hat{R}(h), \delta)\} \ge 1 - \delta$ and $\mathbb{P}\{R(h) \le U(\hat{R}(h), \delta)\} \ge 1 - \delta$, when $h$, $q_h$, and $p_h$ are chosen before seeing the data such that $q_h, p_h \in [0, 1]$ and $\sum_{h \in H} (q_h + p_h) = 1$. If no a priori information is available, $q_h$ and $p_h$ are set to $\frac{1}{2m}$, namely equally distributed. This approach gives poor results since, as a matter of fact, a learning procedure targets just particular hypotheses, namely those with small empirical error, disregarding the others. In this work, we set $q_h$ and $p_h$ in a distribution-dependent way, increasing the probability of being chosen for functions with small true risk. We call this proposal the Distribution-Dependent Weighted UB (DDWUB), and we derive sufficient conditions on the choice of $q_h$ and $p_h$ under which DDWUB outperforms or, in the worst case, degenerates into the UB. Furthermore, theoretical and numerical results show the applicability, validity, and potential of DDWUB.
Keywords
union bound, weighted union bound, distribution-dependent weights, statistical learning theory, finite number of hypotheses