Sharper Bounds for Proximal Gradient Algorithms with Errors
SIAM Journal on Optimization (2024)
Heriot-Watt University
Abstract
We analyze the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies. We generalize the deterministic analysis to the quasi-Fejér case and quantify the uncertainty incurred by approximate computing and early termination errors. We propose new, tighter probabilistic bounds, which we use to verify a simulated Model Predictive Control (MPC) problem with sparse controls, solved with early termination, reduced precision, and proximal errors. We also show that the probabilistic bounds are more suitable than the deterministic ones for algorithm verification and more accurate for application performance guarantees. Under mild statistical assumptions, we further prove that some cumulative error terms follow a martingale property. Conforming to observations, e.g., in [M. Schmidt, N. L. Roux, and F. R. Bach, Convergence rates of inexact proximal-gradient methods for convex optimization, in Advances in Neural Information Processing Systems, 2011, pp. 1458-1466], we also show how acceleration of the algorithm amplifies the gradient and proximal computational errors.
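The setting the abstract describes can be illustrated by a minimal sketch (not the authors' code): an inexact proximal gradient iteration x_{k+1} = prox_{γg}(x_k - γ(∇f(x_k) + e_k)) + r_k, where e_k models the gradient error (e.g., reduced-precision arithmetic) and r_k the proximal error (e.g., early termination of an inner solver). The LASSO instance and the error magnitudes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative composite problem: min_x 0.5*||Ax - b||^2 + lam*||x||_1 (LASSO)
m, n = 40, 100
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part's gradient
gamma = 1.0 / L                 # step size

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_g(v, t):
    # Exact prox of t*lam*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

x = np.zeros(n)
eps_grad, eps_prox = 1e-6, 1e-8   # assumed error magnitudes (illustrative)
for k in range(500):
    e_k = eps_grad * rng.standard_normal(n)   # gradient error, e.g., low precision
    v = x - gamma * (grad_f(x) + e_k)
    r_k = eps_prox * rng.standard_normal(n)   # proximal error, e.g., early stopping
    x = prox_g(v, gamma) + r_k
```

With e_k = r_k = 0 this reduces to the standard proximal gradient method; the paper's bounds quantify how far the perturbed iterates can drift from the exact ones, deterministically and in probability.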
Key words
convex optimization, proximal gradient descent, approximate algorithms