Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

NeurIPS 2020.

Highlight: We start with the natural question of whether there is some distribution-independent implicit regularization that operates on Stochastic Gradient Descent.

Abstract:

The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms. This notion refers to the tendency of the optimization algorithm towards a certain structured solution that often generalizes well. Recently, severa…
