Newton's Method in Mixed Precision

SIAM Review (2022)

Abstract
We investigate the use of reduced precision arithmetic to solve the linear equation for the Newton step. If one neglects the backward error in the linear solve, then well-known convergence theory implies that using single precision in the linear solve has very little negative effect on the nonlinear convergence rate. However, if one considers the effects of backward error, then the usual textbook estimates are very pessimistic and even the state-of-the-art estimates using probabilistic rounding analysis do not fully conform to experiments. We report on experiments with a specific example. We store and factor Jacobians in double, single, and half precision. In the single precision case we observe that the convergence rates for the nonlinear iteration do not degrade as the dimension increases and that the nonlinear iteration statistics are essentially identical to the double precision computation. In half precision we see that the nonlinear convergence rates, while poor, do not degrade as the dimension increases.

Audience. This paper is intended for students who have completed or are taking an entry-level graduate course in numerical analysis and for faculty who teach numerical analysis. The important ideas in the paper are O notation, floating point precision, backward error in linear solvers, and Newton's method.
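For readers who want to experiment with the scheme the abstract describes, here is a minimal Python/NumPy sketch: the residual and the iterate stay in double precision, while the Jacobian is demoted to a lower precision before it is factored and the Newton step is solved. The names `newton_mixed`, `f`, `jac`, and `step_dtype` are illustrative assumptions, not the authors' code, and the paper's actual experiments and storage scheme differ in detail.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_mixed(f, jac, x0, step_dtype=np.float32, tol=1e-10, maxit=20):
    """Newton iteration with the linear solve done in reduced precision.

    The residual f(x) and the iterate x are kept in double precision;
    only the Jacobian factorization and the triangular solves use
    step_dtype (e.g., np.float32). Note: LAPACK has no half-precision
    LU, so a float16 experiment like the paper's must be simulated,
    e.g., by rounding data computed in a higher precision.
    """
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(maxit):
        r = f(x)                            # residual in double precision
        if np.linalg.norm(r) <= tol:
            break
        J = jac(x).astype(step_dtype)       # demote the Jacobian
        lu, piv = lu_factor(J)              # factor in reduced precision
        s = lu_solve((lu, piv), r.astype(step_dtype))
        x = x - s.astype(np.float64)        # promote the step, update in double
    return x

# Toy usage: solve x**2 = 4 starting from x0 = 1.
root = newton_mixed(lambda x: x**2 - 4.0,
                    lambda x: np.diag(2.0 * x),
                    x0=[1.0])
print(root)  # approximately [2.0]
```

With `step_dtype=np.float32` on a well-conditioned problem like this one, the nonlinear iteration history is essentially indistinguishable from a full double-precision run, which is the qualitative behavior the abstract reports for single precision.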
Keywords
Newton's method, mixed precision arithmetic, backward error, probabilistic rounding analysis