Batch size invariant Adam
CoRR (2024)
Abstract
We propose a batch size invariant version of Adam, for use in large-scale,
distributed settings, in which the mini-batch is divided into micro-batches
which are distributed among worker nodes. For the v term, standard Adam first
computes the average over micro-batch gradients, then squares the result, while
in the batch size invariant Adam proposed here, we first square the micro-batch
gradients, then average. Previous work (e.g. Malladi et al. 2022) used an
alternative approach that involved a square-root scaling of the learning rate,
but this approach requires strong assumptions to work; in particular that the
gradient variance dominates the square of the expected gradient. In contrast,
the approach proposed here gives batch size invariance without this assumption.
We confirm that in practice our scheme gives batch size invariance in a much
larger range of scenarios than the previous approach.
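
To make the distinction concrete, the following is a minimal NumPy sketch contrasting the two v updates described in the abstract: standard Adam averages the micro-batch gradients and squares the result, whereas the batch size invariant variant squares each micro-batch gradient before averaging. The function names, the toy setting, and the bare exponential-moving-average form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def v_update_standard(v, micro_grads, beta2=0.999):
    """Standard Adam: average micro-batch gradients first, then square.

    micro_grads: array of shape (num_micro_batches, num_params).
    """
    g_bar = np.mean(micro_grads, axis=0)        # mini-batch gradient
    return beta2 * v + (1.0 - beta2) * g_bar**2

def v_update_invariant(v, micro_grads, beta2=0.999):
    """Batch size invariant variant: square each micro-batch gradient, then average."""
    g_sq = np.mean(micro_grads**2, axis=0)      # average of squared micro-batch gradients
    return beta2 * v + (1.0 - beta2) * g_sq

# Toy illustration (hypothetical numbers): with noisy micro-batch gradients,
# the two updates give different second-moment estimates, and only the
# square-then-average form is insensitive to how many micro-batches are used.
rng = np.random.default_rng(0)
micro_grads = rng.normal(loc=0.1, scale=1.0, size=(8, 4))
v0 = np.zeros(4)
print(v_update_standard(v0, micro_grads))
print(v_update_invariant(v0, micro_grads))
```

Note the intuition this sketch exposes: the average-then-square estimate shrinks as more micro-batches are averaged (gradient noise cancels before squaring), while the square-then-average estimate retains the per-micro-batch second moment, which is what allows batch size invariance without assuming that gradient variance dominates the squared expected gradient.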