AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size
CoRR (2024)
Abstract
This paper presents a novel adaptation of Stochastic Gradient Descent
(SGD), termed AdaBatchGrad. This modification seamlessly integrates an adaptive
step size with an adjustable batch size. An increase in batch size and a
decrease in step size are well-known techniques to tighten the area of
convergence of SGD and decrease its variance. A range of studies by R. Byrd and
J. Nocedal introduced various testing techniques to assess the quality of
mini-batch gradient approximations and choose the appropriate batch sizes at
every step. Methods that utilized exact tests were observed to converge within
O(LR^2/ε) iterations. Conversely, inexact test implementations
sometimes resulted in non-convergence and erratic performance. To address these
challenges, AdaBatchGrad incorporates both adaptive batch and step sizes,
enhancing the method's robustness and stability. For exact tests, our approach
converges in O(LR^2/ε) iterations, analogous to standard gradient
descent. For inexact tests, it achieves convergence in
O(max{LR^2/ε, σ^2R^2/ε^2}) iterations. This makes
AdaBatchGrad markedly more robust and computationally efficient relative to
prevailing methods. To substantiate the efficacy of our method, we
experimentally show how introducing an adaptive step size and an adaptive
batch size gradually improves the performance of regular SGD. The results
imply that AdaBatchGrad surpasses alternative methods, especially when
inexact tests are used.
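
Since the abstract describes combining an adaptive step size with a test-driven batch size, the following is a minimal sketch of that general idea, not the paper's actual algorithm. It assumes a hypothetical per-example gradient oracle `loss_grad`, uses a norm-test-style noise check in the spirit of Byrd and Nocedal's work, and pairs it with an AdaGrad-like step size; the test constant `theta`, the batch-doubling rule, and all other names and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: SGD with an AdaGrad-style adaptive step size
# and a norm-test-driven adaptive batch size. All names and constants here
# are hypothetical, not taken from the paper.
import numpy as np

def sample_batch(n, batch_size, rng):
    """Draw a mini-batch of example indices uniformly at random."""
    return rng.choice(n, size=min(batch_size, n), replace=False)

def adabatch_sgd_sketch(loss_grad, x0, n, T=1000, b0=8, eta=1.0,
                        theta=0.5, seed=0):
    """Hypothetical SGD variant with adaptive step and batch sizes.

    loss_grad(x, idx) must return per-example gradients, shape (len(idx), d).
    """
    rng = np.random.default_rng(seed)
    x, b = x0.copy(), b0
    accum = 0.0  # running sum of squared gradient norms (AdaGrad-style)
    for _ in range(T):
        idx = sample_batch(n, b, rng)
        per_example = loss_grad(x, idx)      # (b, d) per-example gradients
        g = per_example.mean(axis=0)         # mini-batch gradient estimate
        # Inexact "norm test" heuristic: estimate the variance of the
        # mini-batch gradient and compare it with theta^2 * ||g||^2; if the
        # estimate is too noisy, the batch size is enlarged.
        var = per_example.var(axis=0).sum() / len(idx)
        if var > theta**2 * np.dot(g, g):
            b = min(2 * b, n)                # grow the batch (doubling rule)
        # AdaGrad-style step size: eta / sqrt(sum of ||g||^2 so far).
        accum += np.dot(g, g)
        x -= eta / np.sqrt(accum + 1e-12) * g
    return x
```

One plausible rationale for this pairing, reflected in the abstract, is that the adaptive step size keeps the iteration stable even when the noise test fires late or inexactly, while the growing batch size shrinks the gradient variance over time.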