Learning Algorithm Generalization Error Bounds via Auxiliary Distributions
IEEE Journal on Selected Areas in Information Theory (2022)
Abstract
Generalization error bounds are essential for understanding how well machine
learning models generalize. In this work, we propose a novel method, the
Auxiliary Distribution Method, which yields new upper bounds on the expected
generalization error in supervised learning scenarios. We show that our general
upper bounds can be specialized, under certain conditions, to new bounds
involving the α-Jensen-Shannon and α-Rényi (0 < α < 1) information between a
random variable modeling the set of training samples and another random
variable modeling the set of hypotheses. Our upper bounds based on
α-Jensen-Shannon information are also finite. Additionally, we demonstrate how
our auxiliary distribution method can be used to derive upper bounds on the
excess risk of some learning algorithms in the supervised learning context, as
well as on the generalization error under distribution mismatch in supervised
learning, where the mismatch is modeled as the α-Jensen-Shannon or α-Rényi
divergence between the distributions of the test and training data samples. We
also outline the conditions under which our proposed upper bounds may be
tighter than earlier upper bounds.
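As a quick illustration of the two information measures named in the abstract, the sketch below computes the α-Rényi divergence and the α-Jensen-Shannon divergence (taken here in the common form of α-weighted KL divergences to the α-mixture) for discrete distributions. The example distributions `p` and `q` are purely illustrative, not taken from the paper.

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence between discrete distributions (in nats)
    return float(np.sum(p * np.log(p / q)))

def renyi(p, q, alpha):
    # alpha-Renyi divergence for 0 < alpha < 1:
    #   D_alpha(P||Q) = (1/(alpha-1)) * log sum_i p_i^alpha * q_i^(1-alpha)
    return float(np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1))

def js_alpha(p, q, alpha):
    # alpha-Jensen-Shannon divergence: weighted KL terms to the alpha-mixture.
    # This quantity is always finite, matching the abstract's remark that the
    # alpha-Jensen-Shannon-based bounds are finite.
    m = alpha * p + (1 - alpha) * q
    return alpha * kl(p, m) + (1 - alpha) * kl(q, m)

# Illustrative distributions (assumed, not from the paper)
p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
```

For α = 1/2, `js_alpha` reduces to the classical Jensen-Shannon divergence, which is bounded by log 2; the α-Rényi divergence with α < 1 is never larger than the KL divergence, which is one route to bounds that can be tighter than mutual-information-based ones.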
Keywords
Expected generalization error bounds, population risk upper bound, mutual information, α-Jensen-Shannon information, α-Rényi information, distribution mismatch