A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors

Nikhil Ghosh, Mikhail Belkin

SIAM Journal on Mathematics of Data Science (2023)

Abstract
In this work we establish an algorithm- and distribution-independent nonasymptotic trade-off between the model size, excess test loss, and training loss of linear predictors. Specifically, we show that models that perform well on the test data (have low excess loss) are either "classical," with training loss close to the noise level, or "modern," with a much larger number of parameters than the minimum needed to fit the training data exactly. We also provide a more precise asymptotic analysis when the limiting spectral distribution of the whitened features is Marchenko-Pastur. Remarkably, while the Marchenko-Pastur analysis is far more precise near the interpolation peak, where the number of parameters is just enough to fit the training data, it coincides exactly with the distribution-independent bound as the level of overparameterization increases.
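The interpolation-peak behavior referred to in the abstract can be seen in a small simulation. The sketch below is not taken from the paper: the isotropic Gaussian data model, noise level, and feature counts are illustrative assumptions. It fits minimum-norm least-squares predictors using an increasing number of features p on n training points, and prints the training and test losses; the test loss typically spikes near p = n (the interpolation threshold) and decreases again as the model becomes heavily overparameterized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): isotropic Gaussian features,
# a fixed linear ground truth, and additive label noise.
n, d, sigma = 100, 400, 0.5            # train size, total features, noise level
beta = rng.standard_normal(d) / np.sqrt(d)

X_train = rng.standard_normal((n, d))
y_train = X_train @ beta + sigma * rng.standard_normal(n)
X_test = rng.standard_normal((2000, d))
y_test = X_test @ beta + sigma * rng.standard_normal(2000)

for p in [20, 50, 90, 100, 110, 200, 400]:
    # Use only the first p features; pinv gives the least-squares fit for p < n
    # and the minimum-norm interpolating solution for p >= n.
    w = np.linalg.pinv(X_train[:, :p]) @ y_train
    train_loss = np.mean((X_train[:, :p] @ w - y_train) ** 2)
    test_loss = np.mean((X_test[:, :p] @ w - y_test) ** 2)
    print(f"p={p:4d}  train={train_loss:.3f}  test={test_loss:.3f}")
```

In this toy run, small p gives moderate training and test loss ("classical" regime), the test loss blows up near p = n, and for p much larger than n the training loss is zero while the test loss improves again ("modern" regime), matching the dichotomy the abstract describes.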
Keywords
statistical learning theory, overfitting, linear regression, overparametrization