Rényi Divergence Based Bounds on Generalization Error

2021 IEEE Information Theory Workshop (ITW)

Abstract
Generalization error captures the degree to which the output of a learning algorithm overfits the training data. We obtain a family of bounds that, under certain assumptions, generalizes the bounds developed by Xu & Raginsky (2017) and by Bu, Zou, and Veeravalli (2019). Our bounds are based on the Rényi analogue of the Donsker-Varadhan representation of the Kullback-Leibler divergence. We also obtain b...
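For context, the prior results the abstract generalizes can be sketched as follows, under the usual assumptions of those papers (notation assumed here, not taken from the abstract: $S = (Z_1, \ldots, Z_n)$ is the training sample drawn i.i.d. from $\mu$, $W$ is the algorithm's output, and the loss $\ell(w, Z)$ is $\sigma$-subgaussian under $Z \sim \mu$ for every $w$). The Donsker-Varadhan representation of KL divergence, whose Rényi analogue the abstract invokes, is

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \sup_{g} \Big\{ \mathbb{E}_{P}[g] - \log \mathbb{E}_{Q}\big[e^{g}\big] \Big\},
\]

and the two earlier generalization-error bounds are

\[
\big|\mathbb{E}\,\mathrm{gen}(\mu, P_{W|S})\big| \le \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)} \quad \text{(Xu \& Raginsky, 2017)},
\]
\[
\big|\mathbb{E}\,\mathrm{gen}(\mu, P_{W|S})\big| \le \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^{2}\, I(W; Z_{i})} \quad \text{(Bu, Zou \& Veeravalli, 2019)}.
\]

The exact Rényi-divergence-based forms derived in the paper are not recoverable from the truncated abstract.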
Keywords
Upper bound, Conferences, Training data, Mutual information