High-probability Convergence Bounds for Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise
arXiv (2023)
Abstract
We study high-probability convergence guarantees of learning on streaming
data in the presence of heavy-tailed noise. In the proposed scenario, the model
is updated in an online fashion, as new information is observed, without
storing any additional data. To combat the heavy-tailed noise, we consider a
general framework of nonlinear stochastic gradient descent (SGD), providing
several strong results. First, for non-convex costs and component-wise
nonlinearities, we establish a convergence rate arbitrarily close to
𝒪(t^{-1/4}), whose exponent is independent of
noise and problem parameters. Second, for strongly convex costs and a broader
class of nonlinearities, we establish convergence of the last iterate to the
optimum, with a rate 𝒪(t^{-ζ}), where ζ ∈ (0,1) depends on problem
parameters, noise and nonlinearity. As we show
analytically and numerically, ζ can be used to inform the preferred
choice of nonlinearity for given problem settings. Compared to
state-of-the-art works, which consider only clipping, require bounded noise
moments of order η ∈ (1,2], and establish convergence rates whose exponents go
to zero as η → 1, we provide high-probability guarantees for a much broader
class of nonlinearities and for noise with a symmetric density, with
convergence rates whose exponents are bounded away from zero even when the
noise has only a finite first moment. Moreover, for strongly convex costs, we
demonstrate analytically and numerically that clipping is not always the
optimal nonlinearity, further underlining the value of our general framework.
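
To make the framework concrete, below is a minimal sketch (in Python) of the kind of nonlinear SGD update the abstract describes: the stochastic gradient is passed through a component-wise nonlinearity (here clipping or sign) before the step is taken, with each streaming sample used once and discarded. The quadratic cost, the Student-t noise, the step-size schedule, and all constants are assumptions made only to keep the example self-contained; this is not the paper's exact algorithm or experimental setup.

```python
import numpy as np

# Minimal sketch of nonlinear SGD under heavy-tailed noise (illustrative only;
# the cost, noise law, step sizes and constants are assumptions, not the
# paper's exact setup).

rng = np.random.default_rng(0)
d = 10
A = np.diag(np.linspace(1.0, 5.0, d))   # strongly convex quadratic f(x) = 0.5 * x^T A x
x_star = np.zeros(d)                    # minimizer of f

def noisy_grad(x):
    # True gradient plus symmetric heavy-tailed noise: Student-t with 1.5
    # degrees of freedom has a finite first moment but infinite variance.
    return A @ (x - x_star) + rng.standard_t(df=1.5, size=d)

def clip(v, tau=1.0):
    # Component-wise clipping nonlinearity.
    return np.clip(v, -tau, tau)

def sign(v):
    # Component-wise sign nonlinearity.
    return np.sign(v)

def nonlinear_sgd(nonlinearity, T=20_000, a=2.0):
    # Online updates: each sample is observed once and never stored.
    x = rng.normal(size=d)
    for t in range(1, T + 1):
        step = a / t                    # assumed decaying step-size schedule
        x = x - step * nonlinearity(noisy_grad(x))
    return np.linalg.norm(x - x_star)

print("final distance to optimum, clipping:", nonlinear_sgd(clip))
print("final distance to optimum, sign:    ", nonlinear_sgd(sign))
```

Running the two variants side by side mirrors the comparison the abstract alludes to: which nonlinearity converges faster depends on the problem and noise parameters, so clipping need not be the best choice.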