Iterative refinement is a method for improving the quality of an approximate solution $\widehat{x}$ to a linear system of equations $Ax = b$, where $A$ is an $n\times n$ nonsingular matrix. The basic iterative refinement algorithm is very simple.

1. Compute the residual $r = b - A\widehat{x}$.
2. Solve $Ad = r$.
3. Update $\widehat{x} \gets \widehat{x} + d$.
4. Repeat from step 1 if necessary.
At first sight, this algorithm seems as expensive as solving for the original $\widehat{x}$. However, usually the solver is LU factorization with pivoting, $A = LU$ (where we include the permutations in $L$). Most of the work is in the LU factorization, which costs $O(n^3)$ flops, and each iteration requires a multiplication with $A$ for the residual and two substitutions to compute $d$, which total only $O(n^2)$ flops. If the refinement converges quickly then it is inexpensive.
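To make the cost argument concrete, here is a minimal NumPy sketch (the post's own examples are in MATLAB; the names `lu_factor_pp`, `lu_solve_pp`, and `refine` are illustrative, not from the original). The $O(n^3)$ factorization is computed once, and each refinement step costs only $O(n^2)$: one residual and two triangular substitutions with the stored factors.

```python
import numpy as np

def lu_factor_pp(A):
    # LU factorization with partial pivoting, PA = LU, packed into one array.
    # This is the O(n^3) part and is done only once.
    LU = np.array(A, dtype=float)
    n = LU.shape[0]
    piv = np.arange(n)                         # records the row permutation P
    for k in range(n - 1):
        p = k + np.argmax(np.abs(LU[k:, k]))   # choose the largest pivot
        if p != k:
            LU[[k, p]] = LU[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        LU[k+1:, k] /= LU[k, k]
        LU[k+1:, k+1:] -= np.outer(LU[k+1:, k], LU[k, k+1:])
    return LU, piv

def lu_solve_pp(LU, piv, b):
    # Forward and back substitution: only O(n^2) flops per solve.
    n = LU.shape[0]
    y = np.array(b, dtype=float)[piv]
    for i in range(1, n):
        y[i] -= LU[i, :i] @ y[:i]              # unit lower triangular solve
    for i in range(n - 1, -1, -1):
        y[i] = (y[i] - LU[i, i+1:] @ y[i+1:]) / LU[i, i]  # upper triangular solve
    return y

def refine(A, b, steps=3):
    LU, piv = lu_factor_pp(A)                  # O(n^3), once
    x = lu_solve_pp(LU, piv, b)                # initial solution
    for _ in range(steps):
        r = b - A @ x                          # step 1: residual, O(n^2)
        x = x + lu_solve_pp(LU, piv, r)        # steps 2-3: reuse the factors, O(n^2)
    return x
```

A typical use is `x = refine(A, b)` for a square nonsingular `A`; for a well-conditioned matrix the error reaches the working-precision level after a few steps.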
Turning to the error, with a stable LU factorization the initial $\widehat{x}$ computed in floating-point arithmetic of precision $u$ satisfies (omitting constants)

$$\frac{\|x - \widehat{x}\|_\infty}{\|x\|_\infty} \approx \kappa_\infty(A)\, u, \qquad (1)$$

where $\kappa_\infty(A) = \|A\|_\infty \|A^{-1}\|_\infty$ is the matrix condition number in the $\infty$-norm. We would like refinement to produce a solution accurate to precision $u$:

$$\frac{\|x - \widehat{x}\|_\infty}{\|x\|_\infty} \approx u. \qquad (2)$$

But if the solver cannot compute the initial $\widehat{x}$ accurately when $A$ is ill-conditioned, why should it be able to produce an update $d$ that improves $\widehat{x}$?

The simple answer is that when iterative refinement was first used on digital computers the residual $r$ was computed at twice the working precision, which could be done at no extra cost in the hardware. If $\widehat{x}$ is a reasonable approximation then we expect cancellation in forming $r = b - A\widehat{x}$, so using extra precision in forming $r$ should ensure that $r$ has enough correct digits to yield a correction $d$ that improves $\widehat{x}$. This form of iterative refinement produces a solution satisfying (2) as long as $\kappa_\infty(A) < u^{-1}$.
Here is a MATLAB example, where the working precision is single and residuals are computed in double precision.

```matlab
n = 8; A = single(gallery('frank',n)); xact = ones(n,1);
b = A*xact; % b is formed exactly for small n.
x = A\b;
fprintf('Initial error = %4.1e\n', norm(x - xact,inf))
r = single( double(b) - double(A)*double(x) );
d = A\r;
x = x + d;
fprintf('Second error  = %4.1e\n', norm(x - xact,inf))
```

The output is

```
Initial error = 9.1e-04
Second error  = 6.0e-08
```

which shows that after just one step the error has been brought down from about $10^{-3}$ to the level of $u \approx 6.0\times 10^{-8}$, the unit roundoff for IEEE single precision arithmetic.
Fixed Precision Iterative Refinement
By the 1970s, computers had started to lose the ability to cheaply accumulate inner products in extra precision, and extra precision could not be programmed portably in software. It was discovered, though, that even if iterative refinement is run entirely in one precision it can bring benefits when $\kappa_\infty(A) < u^{-1}$. Specifically,

- if the solver is somewhat numerically unstable, the instability is cured by the refinement, in that a relative residual satisfying

  $$\frac{\|b - A\widehat{x}\|_\infty}{\|A\|_\infty \|\widehat{x}\|_\infty + \|b\|_\infty} \approx u \qquad (3)$$

  is produced, and

- a relative error satisfying

  $$\frac{\|x - \widehat{x}\|_\infty}{\|x\|_\infty} \approx \mathrm{cond}(A,x)\, u \qquad (4)$$

  is produced, where $\mathrm{cond}(A,x) = \|\, |A^{-1}| |A| |x| \,\|_\infty / \|x\|_\infty$.

The bound (4) is stronger than (1) because $\mathrm{cond}(A,x)$ is no larger than $\kappa_\infty(A)$ and can be much smaller, especially if $A$ has badly scaled rows.
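The claim that refinement cures a mildly unstable solver can be illustrated with a deliberately unstable factorization. The sketch below (an assumed setup, not from the original post) runs LU *without* pivoting in double precision on a well-conditioned matrix with a tiny first pivot; the initial solution is poor, yet a couple of fixed-precision refinement steps with the same factors restore full accuracy.

```python
import numpy as np

def lu_nopivot(A):
    # LU without pivoting: deliberately unstable when a pivot is tiny.
    LU = np.array(A, dtype=float)
    n = LU.shape[0]
    for k in range(n - 1):
        LU[k+1:, k] /= LU[k, k]
        LU[k+1:, k+1:] -= np.outer(LU[k+1:, k], LU[k, k+1:])
    return LU

def lu_solve(LU, b):
    # Forward and back substitution with the packed factors.
    y = np.array(b, dtype=float)
    n = LU.shape[0]
    for i in range(1, n):
        y[i] -= LU[i, :i] @ y[:i]
    for i in range(n - 1, -1, -1):
        y[i] = (y[i] - LU[i, i+1:] @ y[i+1:]) / LU[i, i]
    return y

A = np.array([[1e-10, 1.0],
              [1.0,   1.0]])    # well conditioned, but a terrible first pivot
xact = np.ones(2)
b = A @ xact
LU = lu_nopivot(A)
x = lu_solve(LU, b)             # unstable solve: noticeable error
err0 = np.max(np.abs(x - xact))
for _ in range(2):
    r = b - A @ x               # residual in the working precision
    x = x + lu_solve(LU, r)     # reuse the same (unstable) factors
err2 = np.max(np.abs(x - xact))
```

Everything here runs in one (double) precision; the improvement comes from the refinement iteration itself, as the fixed-precision theory above predicts.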
Low Precision Factorization
References
We give five references, which contain links to the earlier literature.
- Erin Carson and Nicholas J. Higham. Accelerating the Solution of Linear Systems by Iterative Refinement in Three Precisions. SIAM J. Sci. Comput., 40(2):A817–A847, 2018.
- Azzam Haidar, Harun Bayraktar, Stanimire Tomov, Jack Dongarra, and Nicholas J. Higham. Mixed-Precision Iterative Refinement Using Tensor Cores on GPUs to Accelerate Solution of Linear Systems. Proc. Roy. Soc. London A, 476(2243):20200110, 2020.
- Nicholas J. Higham and Theo Mary. Mixed Precision Algorithms in Numerical Linear Algebra. Acta Numerica, 31:347–414, 2022.
- Nicholas J. Higham and Dennis Sherwood. How to Boost Your Creativity. SIAM News, 55(5):1, 3, 2022. (Explains how developments in iterative refinement 1948–2022 correspond to asking "how might this be different?" about each aspect of the algorithm.)
- Julie Langou, Julien Langou, Piotr Luszczek, Jakub Kurzak, Alfredo Buttari, and Jack Dongarra. Exploiting the Performance of 32 Bit Floating Point Arithmetic in Obtaining 64 Bit Accuracy (Revisiting Iterative Refinement for Linear Systems). In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, IEEE, November 2006.