Adam: A Method for Stochastic Optimization

International Conference on Learning Representations (ICLR), 2015.

Abstract:

We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters.

Introduction
  • Stochastic gradient-based optimization is of core practical importance in many fields of science and engineering.
  • Many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case optimization can be made more efficient by taking gradient steps w.r.t. individual subfunctions, i.e. stochastic gradient descent (SGD) or ascent.
  • Objectives may have sources of noise other than data subsampling, such as dropout regularization (Hinton et al., 2012b).
  • For all such noisy objectives, efficient stochastic optimization techniques are required.
  • Some of Adam’s advantages are that the magnitudes of parameter updates are invariant to rescaling of the gradient, its stepsizes are approximately bounded by the stepsize hyperparameter, it does not require a stationary objective, it works with sparse gradients, and it naturally performs a form of step size annealing (a minimal sketch of the update rule follows below).
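    To make the advantages above concrete, the following is a minimal NumPy sketch of the Adam update as it is commonly implemented: exponential moving averages of the gradient and its elementwise square, bias-corrected before use, with the paper's default hyperparameters (alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8). The function and variable names, and the toy quadratic used to exercise it, are illustrative choices, not taken from the paper.

        import numpy as np

        def adam_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
            """One Adam update. theta: parameters, grad: gradient at theta,
            m, v: running moment estimates, t: 1-based step counter."""
            m = beta1 * m + (1 - beta1) * grad        # biased first moment estimate
            v = beta2 * v + (1 - beta2) * grad ** 2   # biased second raw moment estimate
            m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
            v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
            theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
            return theta, m, v

        # Toy usage: minimize f(x) = ||x||^2 from an arbitrary start.
        theta = np.array([1.0, -2.0])
        m, v = np.zeros_like(theta), np.zeros_like(theta)
        for t in range(1, 5001):
            grad = 2 * theta                          # gradient of ||x||^2
            theta, m, v = adam_step(theta, grad, m, v, t)
        print(theta)                                  # close to the minimum at the origin

    The invariance to gradient rescaling mentioned above can be read off the update: multiplying grad by a constant c scales m_hat by c and sqrt(v_hat) by |c|, so their ratio, and hence the step, is unchanged (up to the eps term).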
Highlights
  • Stochastic gradient-based optimization is of core practical importance in many fields of science and engineering
  • Many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case optimization can be made more efficient by taking gradient steps w.r.t. individual subfunctions, i.e. stochastic gradient descent (SGD) or ascent
  • We propose Adam, a method for efficient stochastic optimization that only requires first-order gradients and has low memory requirements
  • We have introduced a simple and computationally efficient algorithm for gradient-based optimization of stochastic objective functions
  • The method combines the advantages of two recently popular optimization methods: the ability of AdaGrad to deal with sparse gradients, and the ability of RMSProp to deal with non-stationary objectives
  • We found Adam to be robust and well-suited to a wide range of non-convex optimization problems in the field of machine learning
Methods
  • To empirically evaluate the proposed method, the authors investigated different popular machine learning models, including logistic regression, multilayer fully connected neural networks and deep convolutional neural networks.
  • Using large models and datasets, the authors demonstrate Adam can efficiently solve practical deep learning problems.
  • The authors use the same parameter initialization when comparing different optimization algorithms.
  • The authors evaluate the proposed method on L2-regularized multi-class logistic regression using the MNIST dataset.
  • The authors compare Adam to accelerated SGD with Nesterov momentum and AdaGrad (a toy comparison in this spirit is sketched below).
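    The paper's experiments are run on MNIST and related benchmarks; as a stand-in, the sketch below sets up a hypothetical toy version of such a comparison: synthetic L2-regularized logistic regression, with Adam and AdaGrad started from the same parameter initialization and fed the same minibatch sequence. The data generator, learning rates, and helper names are assumptions made for illustration, not the paper's setup.

        import numpy as np

        # Synthetic binary classification data (stand-in for MNIST).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))
        true_w = rng.normal(size=20)
        y = (X @ true_w + 0.5 * rng.normal(size=1000) > 0).astype(float)
        lam = 1e-3  # L2 regularization strength

        def loss_and_grad(w, xb, yb):
            p = 1.0 / (1.0 + np.exp(-(xb @ w)))                     # sigmoid predictions
            loss = -np.mean(yb * np.log(p + 1e-12)
                            + (1 - yb) * np.log(1 - p + 1e-12)) + 0.5 * lam * w @ w
            grad = xb.T @ (p - yb) / len(yb) + lam * w
            return loss, grad

        def train(update, w0, steps=2000, batch=128):
            w, state = w0.copy(), None
            batches = np.random.default_rng(1)                      # same minibatch sequence for both runs
            for t in range(1, steps + 1):
                idx = batches.integers(0, len(X), size=batch)
                _, g = loss_and_grad(w, X[idx], y[idx])
                w, state = update(w, g, state, t)
            return loss_and_grad(w, X, y)[0]

        def adam(w, g, state, t, alpha=1e-2, b1=0.9, b2=0.999, eps=1e-8):
            m, v = state if state is not None else (np.zeros_like(w), np.zeros_like(w))
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g ** 2
            step = alpha * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
            return w - step, (m, v)

        def adagrad(w, g, state, t, alpha=1e-1, eps=1e-8):
            G = (state if state is not None else np.zeros_like(w)) + g ** 2  # accumulated squared gradients
            return w - alpha * g / (np.sqrt(G) + eps), G

        w0 = rng.normal(scale=0.01, size=20)                        # shared initialization
        print("Adam    final loss:", train(adam, w0))
        print("AdaGrad final loss:", train(adagrad, w0))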
Conclusion
  • The authors have introduced a simple and computationally efficient algorithm for gradient-based optimization of stochastic objective functions.
  • The authors' method is aimed towards machine learning problems with large datasets and/or high-dimensional parameter spaces.
  • The method combines the advantages of two recently popular optimization methods: the ability of AdaGrad to deal with sparse gradients, and the ability of RMSProp to deal with non-stationary objectives.
  • The authors found Adam to be robust and well-suited to a wide range of non-convex optimization problems in the field of machine learning.
Related work
  • Optimization methods bearing a direct relation to Adam are RMSProp (Tieleman & Hinton, 2012; Graves, 2013) and AdaGrad (Duchi et al., 2011); these relationships are discussed below. Other stochastic optimization methods include vSGD (Schaul et al., 2012), AdaDelta (Zeiler, 2012) and the natural Newton method of Roux & Fitzgibbon (2010), all of which set stepsizes by estimating curvature from first-order information. The Sum-of-Functions Optimizer (SFO) (Sohl-Dickstein et al., 2014) is a quasi-Newton method based on minibatches, but (unlike Adam) it has memory requirements linear in the number of minibatch partitions of a dataset, which is often infeasible on memory-constrained systems such as a GPU. Like natural gradient descent (NGD) (Amari, 1998), Adam employs a preconditioner that adapts to the geometry of the data, since vt is an approximation to the diagonal of the Fisher information matrix (Pascanu & Bengio, 2013); however, Adam’s preconditioner (like AdaGrad’s) is more conservative in its adaptation than vanilla NGD, preconditioning with the square root of the inverse of the diagonal Fisher information matrix approximation.
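    To make the relation described above explicit, here is a short sketch of the two preconditioners (a paraphrase in standard notation, with F the Fisher information matrix, m̂_t Adam's bias-corrected first moment estimate, and v_t its second raw moment estimate; the update forms are written for illustration, not quoted from the paper):

        % Natural gradient descent preconditions with the full inverse Fisher:
        \theta_{t+1} = \theta_t - \alpha \, F^{-1} \nabla_{\theta} \mathcal{L}(\theta_t)

        % Adam (like AdaGrad) preconditions only with the square root of the inverse
        % of a diagonal approximation, where v_t \approx \mathrm{diag}(F):
        \theta_{t+1} = \theta_t - \alpha \, \mathrm{diag}(\hat{v}_t)^{-1/2} \, \hat{m}_t

    Taking the square root damps the preconditioner, which is why the adaptation is described as more conservative than vanilla NGD.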

    RMSProp: An optimization method closely related to Adam is RMSProp (Tieleman & Hinton, 2012); a version with momentum has sometimes been used (Graves, 2013). There are a few important differences between RMSProp with momentum and Adam: RMSProp with momentum generates its parameter updates using momentum on the rescaled gradient, whereas Adam's updates are directly estimated using running averages of the first and second moments of the gradient. RMSProp also lacks a bias-correction term; this matters most when β2 is close to 1 (as required for sparse gradients), since in that case the uncorrected bias leads to very large stepsizes and often divergence, as we also empirically demonstrate in section 6.4.
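    The role of the bias-correction term mentioned above can be spelled out with a short derivation (assuming, for illustration, a stationary second moment E[g_t^2] = E[g^2]):

        % Exponential moving average of squared gradients, initialized at v_0 = 0:
        v_t = (1 - \beta_2) \sum_{i=1}^{t} \beta_2^{\,t-i} g_i^2
        % so its expectation is biased toward zero early in training:
        \mathbb{E}[v_t] \approx \mathbb{E}[g^2] \, (1 - \beta_2^t)
        % which the correction removes:
        \hat{v}_t = \frac{v_t}{1 - \beta_2^t}

    For example, at t = 1 with β2 = 0.999 one has v_1 = 0.001·g_1², so an uncorrected step proportional to g_1/√v_1 is roughly 31.6 times larger than the corrected step proportional to g_1/√v̂_1; this is the source of the very large initial stepsizes and possible divergence noted above.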
Funding
  • Diederik Kingma is supported by the Google European Doctorate Fellowship in Deep Learning
Reference
  • Amari, Shun-Ichi. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
  • Deng, Li, Li, Jinyu, Huang, Jui-Ting, Yao, Kaisheng, Yu, Dong, Seide, Frank, Seltzer, Michael, Zweig, Geoff, He, Xiaodong, Williams, Jason, et al. Recent advances in deep learning for speech research at Microsoft. In ICASSP 2013, 2013.
  • Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
  • Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
  • Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645–6649. IEEE, 2013.
  • Hinton, G. E. and Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E., Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N., et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012a.
  • Hinton, Geoffrey E., Srivastava, Nitish, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012b.
  • Kingma, Diederik P. and Welling, Max. Auto-Encoding Variational Bayes. In The 2nd International Conference on Learning Representations (ICLR), 2013.
  • Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
  • Maas, Andrew L., Daly, Raymond E., Pham, Peter T., Huang, Dan, Ng, Andrew Y., and Potts, Christopher. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pp. 142–150. Association for Computational Linguistics, 2011.
  • Moulines, Eric and Bach, Francis R. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems, pp. 451–459, 2011.
  • Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
  • Polyak, Boris T. and Juditsky, Anatoli B. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
  • Roux, Nicolas L. and Fitzgibbon, Andrew W. A fast natural Newton method. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 623–630, 2010.
  • Ruppert, David. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.
  • Schaul, Tom, Zhang, Sixin, and LeCun, Yann. No more pesky learning rates. arXiv preprint arXiv:1206.1106, 2012.
  • Sohl-Dickstein, Jascha, Poole, Ben, and Ganguli, Surya. Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 604–612, 2014.
  • Sutskever, Ilya, Martens, James, Dahl, George, and Hinton, Geoffrey. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1139–1147, 2013.
  • Tieleman, T. and Hinton, G. Lecture 6.5 - RMSProp, COURSERA: Neural Networks for Machine Learning. Technical report, 2012.
  • Wang, Sida and Manning, Christopher. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 118–126, 2013.
  • Zeiler, Matthew D. AdaDelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
  • Zinkevich, Martin. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), 2003.