Rethinking Generalisation

arXiv (2019)

Abstract
In this paper, we present a new approach to computing the generalisation performance, assuming that the distribution of risks, $\rho(r)$, for a learning scenario is known. This allows us to compute the expected error of a learning machine that uses empirical risk minimisation. We show that it is possible to obtain results for both classification and regression, and that a critical quantity in determining the generalisation performance is the power-law behaviour of $\rho(r)$ around its minimum value. We compute $\rho(r)$ for the case of all Boolean functions and for the perceptron. We begin with a simplified analysis and then carry out a more formal one, showing that the simplified results are qualitatively correct and provide a good approximation to the exact results if the true training set size is replaced with an approximate training set size.
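The role of the power-law exponent can be illustrated with a quick Monte Carlo experiment. The sketch below is not from the paper: it assumes a hypothetical risk density $\rho(r) \propto (r - r_{\min})^\theta$ near its minimum and checks numerically that the expected minimum risk over $N$ independent draws decays as $N^{-1/(\theta+1)}$, the extreme-value scaling suggested by the abstract's claim that the behaviour of $\rho(r)$ around its minimum controls generalisation.

```python
# Monte Carlo sketch (illustrative assumption, not the paper's method):
# for a risk density rho(r) ~ (r - r_min)^theta near its minimum, the
# expected minimum risk over n i.i.d. draws should decay as n^(-1/(theta+1)).
import numpy as np

rng = np.random.default_rng(0)

r_min = 0.1      # assumed minimum achievable risk
theta = 2.0      # assumed power-law exponent of rho(r) near r_min
trials = 2000    # Monte Carlo repetitions per sample size

def sample_risks(n):
    # Inverse-CDF sampling: if U ~ Uniform(0, 1), then
    # r = r_min + U**(1/(theta+1)) has density proportional to
    # (r - r_min)**theta on [r_min, r_min + 1].
    u = rng.uniform(size=n)
    return r_min + u ** (1.0 / (theta + 1.0))

for n in [10, 100, 1000, 10000]:
    # Expected excess risk of the best of n randomly drawn hypotheses.
    mins = np.array([sample_risks(n).min() for _ in range(trials)])
    excess = mins.mean() - r_min
    # Predicted extreme-value scaling: excess ~ n**(-1/(theta+1)).
    predicted = n ** (-1.0 / (theta + 1.0))
    print(f"n={n:6d}  excess={excess:.4f}  n^(-1/(theta+1))={predicted:.4f}")
```

The ratio of the measured excess risk to the predicted power of $n$ should settle to a constant as $n$ grows, confirming that the exponent $\theta$, rather than the detailed shape of $\rho(r)$, sets the decay rate.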