On the training dynamics of deep networks with L2 regularization

NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems (2020)

Abstract
We study the role of L2 regularization in deep learning, and uncover simple relations between the performance of the model, the L2 coefficient, the learning rate, and the number of training steps. These empirical relations hold when the network is overparameterized. They can be used to predict the optimal regularization parameter of a given model. In addition, based on these observations we propose a dynamical schedule for the regularization parameter that improves performance and speeds up training. We test these proposals in modern image classification settings. Finally, we show that these empirical relations can be understood theoretically in the context of infinitely wide networks. We derive the gradient flow dynamics of such networks, and compare the role of L2 regularization in this context with that of linear models.
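The abstract does not spell out the dynamical schedule itself. Below is a minimal PyTorch sketch of what such a schedule could look like in a training loop, assuming, purely for illustration, a decay rule that holds the product of the L2 coefficient, the learning rate, and the step count roughly constant; the model, toy data, and decay rule are hypothetical placeholders, not the paper's method.

```python
# Minimal sketch of a dynamical L2 (weight-decay) schedule in PyTorch.
# The decay rule lam(step) = lam0 / step is a hypothetical illustration:
# it keeps lam * lr * step constant, in the spirit of the relations the
# abstract describes, but is not the paper's exact schedule.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
lr, lam0 = 0.1, 1e-2  # learning rate and initial L2 coefficient (assumed values)
opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=lam0)

x = torch.randn(512, 32)               # toy inputs standing in for a real dataset
y = torch.randint(0, 10, (512,))       # toy class labels
loss_fn = nn.CrossEntropyLoss()

for step in range(1, 1001):
    # Update the L2 coefficient before each step: lam * lr * step stays constant.
    for group in opt.param_groups:
        group["weight_decay"] = lam0 / step
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Adjusting `weight_decay` through `opt.param_groups` is a standard way to vary the L2 coefficient during training without rebuilding the optimizer; any concrete schedule would need to be taken from the paper itself.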