Conjugate residual method

The conjugate residual method is an iterative numerical method used for solving systems of linear equations. It is a Krylov subspace method, very similar to the much more popular conjugate gradient method, with similar construction and convergence properties.

This method is used to solve linear equations of the form

$$\mathbf{A}\mathbf{x} = \mathbf{b}$$

where $\mathbf{A}$ is an invertible and Hermitian matrix, and $\mathbf{b}$ is nonzero.

The conjugate residual method differs from the closely related conjugate gradient method in that it involves more numerical operations and requires more storage; in exchange, the system matrix is only required to be Hermitian, not Hermitian positive definite.

Given an (arbitrary) initial estimate of the solution $\mathbf{x}_0$, the method is outlined below:

$$
\begin{aligned}
\mathbf{x}_0 &:= \text{Some initial guess} \\
\mathbf{r}_0 &:= \mathbf{b} - \mathbf{A}\mathbf{x}_0 \\
\mathbf{p}_0 &:= \mathbf{r}_0 \\
&\text{Iterate, with } k \text{ starting at } 0: \\
\alpha_k &:= \frac{\mathbf{r}_k^{\mathrm{T}} \mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^{\mathrm{T}} \mathbf{A}\mathbf{p}_k} \\
\mathbf{x}_{k+1} &:= \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
\mathbf{r}_{k+1} &:= \mathbf{r}_k - \alpha_k \mathbf{A}\mathbf{p}_k \\
\beta_k &:= \frac{\mathbf{r}_{k+1}^{\mathrm{T}} \mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^{\mathrm{T}} \mathbf{A}\mathbf{r}_k} \\
\mathbf{p}_{k+1} &:= \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
\mathbf{A}\mathbf{p}_{k+1} &:= \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k \\
k &:= k+1
\end{aligned}
$$

The iteration may be stopped once $\mathbf{x}_k$ has been deemed to have converged. The only difference between this and the conjugate gradient method is the calculation of $\alpha_k$ and $\beta_k$ (plus the optional incremental calculation of $\mathbf{A}\mathbf{p}_k$ at the end).

Note: the above algorithm can be transformed so as to perform only one symmetric matrix–vector multiplication in each iteration, as the sketch below illustrates.
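As a concrete illustration, here is a minimal NumPy sketch of the iteration above, assuming a real symmetric $\mathbf{A}$; the function name `conjugate_residual`, the tolerance-based stopping test, and the iteration cap are illustrative choices, not part of the method's definition. It keeps $\mathbf{A}\mathbf{p}_k$ up to date incrementally, so each pass through the loop performs only the single matrix–vector product $\mathbf{A}\mathbf{r}_{k+1}$:

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Conjugate residual method for A x = b with real symmetric A (sketch)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x              # r_0 = b - A x_0
    p = r.copy()               # p_0 = r_0
    Ar = A @ r
    Ap = Ar.copy()             # p_0 = r_0 implies A p_0 = A r_0
    rAr = r @ Ar               # scalar r_k^T A r_k, carried between iterations
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:   # stop once the residual is small enough
            break
        Ar = A @ r                    # the one matrix-vector product per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap           # incremental update: A p_{k+1} = A r_{k+1} + beta_k A p_k
        rAr = rAr_new
    return x

# Example usage on a small symmetric system:
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b)
print(x, np.linalg.norm(A @ x - b))   # solution and its (near-zero) residual norm
```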

Preconditioning

By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method:

$$
\begin{aligned}
\mathbf{x}_0 &:= \text{Some initial guess} \\
\mathbf{r}_0 &:= \mathbf{M}^{-1}(\mathbf{b} - \mathbf{A}\mathbf{x}_0) \\
\mathbf{p}_0 &:= \mathbf{r}_0 \\
&\text{Iterate, with } k \text{ starting at } 0: \\
\alpha_k &:= \frac{\mathbf{r}_k^{\mathrm{T}} \mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^{\mathrm{T}} \mathbf{M}^{-1} \mathbf{A}\mathbf{p}_k} \\
\mathbf{x}_{k+1} &:= \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
\mathbf{r}_{k+1} &:= \mathbf{r}_k - \alpha_k \mathbf{M}^{-1} \mathbf{A}\mathbf{p}_k \\
\beta_k &:= \frac{\mathbf{r}_{k+1}^{\mathrm{T}} \mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^{\mathrm{T}} \mathbf{A}\mathbf{r}_k} \\
\mathbf{p}_{k+1} &:= \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
\mathbf{A}\mathbf{p}_{k+1} &:= \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k \\
k &:= k+1
\end{aligned}
$$

The preconditioner $\mathbf{M}^{-1}$ must be symmetric positive definite. Note that the residual vector here is different from the residual vector without preconditioning.
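A matching NumPy sketch of the preconditioned variant, under the same assumptions as before; the callable `M_inv`, which applies $\mathbf{M}^{-1}$ to a vector, is an illustrative device, and the example uses a simple Jacobi (diagonal) preconditioner:

```python
import numpy as np

def preconditioned_conjugate_residual(A, b, M_inv, x0=None, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate residual method (sketch); M_inv(v) applies M^{-1} to v."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = M_inv(b - A @ x)       # preconditioned residual r_0 = M^{-1}(b - A x_0)
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        MAp = M_inv(Ap)                 # M^{-1} A p_k, shared by alpha and the r update
        alpha = rAr / (Ap @ MAp)
        x = x + alpha * p
        r = r - alpha * MAp
        if np.linalg.norm(r) < tol:     # note: this tests the *preconditioned* residual
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap             # incremental update of A p_{k+1}
        rAr = rAr_new
    return x

# Example: Jacobi (diagonal) preconditioning of a small symmetric system.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)                          # positive diagonal, so M^{-1} is SPD
x = preconditioned_conjugate_residual(A, b, M_inv=lambda v: v / d)
print(x, np.linalg.norm(A @ x - b))
```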
