Adaptive Methods for Nonconvex Optimization
NeurIPS, pp. 9815-9825, 2018.
Keywords: deep learning, nonconvex optimization, constant factor, convex optimization, square root
Adaptive gradient methods that rely on scaling gradients down by the square root of exponential moving averages of past squared gradients, such as RMSProp, Adam, and Adadelta, have found wide application in optimizing the nonconvex problems that arise in deep learning. However, it has been recently demonstrated that such methods can fail to converge even in simple convex settings.
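The scaling described above can be sketched as follows. This is a minimal, illustrative Adam-style update (hyperparameter names `lr`, `beta1`, `beta2`, `eps` are the conventional ones, not taken from this paper): the step divides an averaged gradient by the square root of an exponential moving average of past squared gradients.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update: scale the step by the square root of an
    exponential moving average (EMA) of past squared gradients."""
    m = beta1 * m + (1 - beta1) * grad           # EMA of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2      # EMA of squared gradients
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # square-root scaling
    return w, m, v

# Usage sketch: minimize f(w) = w^2 starting from w = 1.0.
w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    grad = 2 * w                                 # gradient of w^2
    w, m, v = adam_step(w, grad, m, v, t, lr=0.05)
```

RMSProp and Adadelta differ in the averaging details (RMSProp omits the first-moment EMA and bias correction), but all three share the square-root denominator that this sketch highlights.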