Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
arXiv (2024)
Abstract
Adaptive methods are extremely popular in machine learning as they make
learning rate tuning less expensive. This paper introduces a novel optimization
algorithm named KATE, which presents a scale-invariant adaptation of the
well-known AdaGrad algorithm. We prove the scale-invariance of KATE for the
case of Generalized Linear Models. Moreover, for general smooth non-convex
problems, we establish a convergence rate of O(log T/√T) for KATE, matching the best-known rates for AdaGrad and Adam. We also
compare KATE with other state-of-the-art adaptive algorithms, Adam and AdaGrad, in
numerical experiments with different problems, including complex machine
learning tasks like image classification and text classification on real data.
The results indicate that KATE consistently outperforms AdaGrad and
matches or surpasses the performance of Adam in all considered scenarios.
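
The abstract names the key idea, removing the square root from AdaGrad's per-coordinate step size, but does not spell out KATE's update rule. The sketch below contrasts a textbook AdaGrad step with a KATE-style step reconstructed from the paper's description; the second accumulator m2, the eta term, and all function names are assumptions for illustration, not taken from this abstract.

```python
import numpy as np

# Toy quadratic problem: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)

def adagrad_step(x, g, b2, lr=1.0, eps=1e-8):
    # Standard AdaGrad: accumulate squared gradients per coordinate and
    # divide the step by the SQUARE ROOT of the accumulator -- the root
    # that KATE removes.
    b2 = b2 + g**2
    return x - lr * g / (np.sqrt(b2) + eps), b2

def kate_step(x, g, b2, m2, lr=1.0, eta=0.0, eps=1e-8):
    # KATE-style step (a reconstruction, not stated in this abstract):
    # keep AdaGrad's accumulator b2, maintain a second accumulator m2,
    # and scale the step by sqrt(m2) / b2 instead of 1 / sqrt(b2).
    b2 = b2 + g**2
    m2 = m2 + eta * g**2 + g**2 / (b2 + eps)
    return x - lr * np.sqrt(m2) * g / (b2 + eps), b2, m2

x, b2, m2 = np.zeros(5), np.zeros(5), np.zeros(5)
for _ in range(500):
    x, b2, m2 = kate_step(x, grad(x), b2, m2, lr=0.5)
print("final loss:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

In this sketch, with eta = 0, multiplying every gradient by a constant c rescales b2 by c² while leaving m2 unchanged, so the step scales by 1/c, which is the kind of behavior the abstract's scale-invariance claim for generalized linear models suggests.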