Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study
NeurIPS 2020.
We start with the natural question of whether there is some distribution-independent implicit regularization that operates in stochastic gradient descent (SGD).
The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms. This notion refers to the tendency of the optimization algorithm towards a certain structured solution that often generalizes well.