A Bayesian Perspective on Generalization and Stochastic Gradient Descent

International Conference on Learning Representations (2018)

Abstract
We consider two related questions at the heart of machine learning: how can we predict whether a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also study how batch size influences test performance, observing an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the ``noise scale'' $g = \epsilon (\frac{N}{B} - 1) \approx \epsilon N/B$, where $\epsilon$ is the learning rate, $N$ the training set size, and $B$ the batch size. Consequently the optimum batch size is proportional to the learning rate and the training set size, $B_{opt} \propto \epsilon N$. We verify these predictions empirically.
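As a rough illustration of the stated scaling relation (a minimal sketch, not the authors' code), the snippet below evaluates the noise scale $g = \epsilon (N/B - 1) \approx \epsilon N/B$ for a hypothetical training setup and shows how, if the optimal noise scale is held fixed, the predicted optimum batch size grows linearly with the learning rate and the training set size. All numerical values (learning rate, dataset size, $g_{opt}$) are assumptions chosen for illustration only.

```python
# Minimal sketch (assumed values, not the paper's experiments): compute the
# SGD noise scale g = eps * (N/B - 1) ~= eps * N/B from the abstract, and the
# implied linear scaling of the optimum batch size, B_opt ∝ eps * N.

def noise_scale(eps, N, B):
    """Exact and approximate noise scale from the SDE view of SGD."""
    exact = eps * (N / B - 1)
    approx = eps * N / B  # good approximation when B << N
    return exact, approx

# Hypothetical setup: learning rate eps, training set size N, batch size B.
eps, N, B = 0.1, 50_000, 128
exact, approx = noise_scale(eps, N, B)
print(f"noise scale: exact = {exact:.2f}, approx = {approx:.2f}")

# If the optimal noise scale g_opt is (assumed) constant, then
# B_opt ≈ eps * N / g_opt, i.e. B_opt scales linearly with eps and N.
g_opt = 10.0  # assumed constant, for illustration only
for eps in (0.05, 0.1, 0.2):
    B_opt = eps * N / g_opt
    print(f"eps = {eps:.2f} -> predicted B_opt ≈ {B_opt:.0f}")
```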
Keywords
Bayesian perspective, generalization