Adaptive Moment Estimation (Adam)

Published on: July 18, 2021


Adaptive Moment Estimation, better known as Adam, is another adaptive learning rate method, first published in 2014 by Kingma et al. [1] In addition to storing an exponentially decaying average of past squared gradients $v_t$ like Adadelta or RMSprop, Adam also keeps an exponentially decaying average of past gradients $m_t$, similar to SGD with momentum: [2]

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$$
$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$$

$m_t$ is an estimate of the first moment (the mean) and $v_t$ is an estimate of the second moment (the uncentered variance) of the gradients, respectively. As $m_t$ and $v_t$ are initialized as vectors of 0's, the authors of Adam observe that they are biased towards zero, especially during the initial time steps, and especially when the decay rates are small (i.e. $\beta_1$ and $\beta_2$ are close to 1). [2]

They counteract these biases by computing bias-corrected first and second moment estimates:

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$$
$$\hat{v}_t = \frac{v_t}{1 - \beta_2^t}$$

$\hat{m}_t$ and $\hat{v}_t$ are then used to update the parameters as follows:

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t$$

As default values, the authors propose 0.9 for $\beta_1$, 0.999 for $\beta_2$, and $10^{-8}$ for $\epsilon$.

[1] Diederik P. Kingma and Jimmy Ba (2014). Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.

[2] Sebastian Ruder (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.

Code
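
Below is a minimal NumPy sketch of the update rule described above. The function name `adam_update`, the default learning rate `lr`, and the toy quadratic objective in the usage example are illustrative assumptions, not part of the original post.

```python
import numpy as np

def adam_update(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Perform one Adam step; returns updated parameters and moment estimates."""
    # Exponentially decaying average of past gradients (first moment)
    m = beta1 * m + (1 - beta1) * grad
    # Exponentially decaying average of past squared gradients (second moment)
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias-corrected moment estimates (t starts at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage example: minimize f(theta) = theta^2, whose gradient is 2 * theta
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 1001):
    grad = 2 * theta
    theta, m, v = adam_update(theta, grad, m, v, t, lr=0.1)
print(theta)  # close to 0
```
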

