Biconjugate gradient method


In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations

$Ax = b.$

Unlike the conjugate gradient method, this algorithm does not require the matrix $A$ to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose $A^*$.

The algorithm

  1. Choose initial guess $x_0$, two other vectors $x_0^*$ and $b^*$, and a preconditioner $M$
  2. $r_0 \leftarrow b - A x_0$
  3. $r_0^* \leftarrow b^* - x_0^* A$
  4. $p_0 \leftarrow M^{-1} r_0$
  5. $p_0^* \leftarrow r_0^* M^{-1}$
  6. for $k = 0, 1, \ldots$ do
    1. $\alpha_k \leftarrow \dfrac{r_k^* M^{-1} r_k}{p_k^* A p_k}$
    2. $x_{k+1} \leftarrow x_k + \alpha_k p_k$
    3. $x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k} p_k^*$
    4. $r_{k+1} \leftarrow r_k - \alpha_k A p_k$
    5. $r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k} p_k^* A$
    6. $\beta_k \leftarrow \dfrac{r_{k+1}^* M^{-1} r_{k+1}}{r_k^* M^{-1} r_k}$
    7. $p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k p_k$
    8. $p_{k+1}^* \leftarrow r_{k+1}^* M^{-1} + \overline{\beta_k} p_k^*$

In the above formulation, the computed $r_k$ and $r_k^*$ satisfy

$r_k = b - A x_k,$
$r_k^* = b^* - x_k^* A$

and thus are the respective residuals corresponding to $x_k$ and $x_k^*$, as approximate solutions to the systems

$A x = b,$
$x^* A = b^*;$

$x^*$ is the adjoint, and $\overline{\alpha}$ is the complex conjugate.
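The preconditioned iteration above can be sketched in dense NumPy as follows. The function name `bicg`, the Jacobi preconditioner, and the random test system are illustrative choices only, and $M^{-1}$ is formed explicitly just for clarity; practical codes apply the preconditioner implicitly and work with sparse matrices.

```python
import numpy as np

def bicg(A, b, bs, M, x0, xs0, maxiter=50, tol=1e-10):
    """Sketch of the preconditioned BiCG iteration (dense, for illustration).

    Solves A x = b together with the row system xs A = bs.  Starred
    quantities are stored as 1-D arrays and multiplied from the left.
    """
    Minv = np.linalg.inv(M)                 # illustration only
    x, xs = x0.copy(), xs0.copy()
    r, rs = b - A @ x, bs - xs @ A          # r0 and r0*
    p, ps = Minv @ r, rs @ Minv             # p0 and p0*
    for _ in range(maxiter):
        alpha = (rs @ Minv @ r) / (ps @ A @ p)
        x, xs = x + alpha * p, xs + np.conj(alpha) * ps
        r_new, rs_new = r - alpha * (A @ p), rs - np.conj(alpha) * (ps @ A)
        if np.linalg.norm(r_new) < tol and np.linalg.norm(rs_new) < tol:
            return x, xs
        beta = (rs_new @ Minv @ r_new) / (rs @ Minv @ r)
        p, ps = Minv @ r_new + beta * p, rs_new @ Minv + np.conj(beta) * ps
        r, rs = r_new, rs_new
    return x, xs

# Illustrative test: a small diagonally dominant nonsymmetric system.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
b, bs = rng.standard_normal(n), rng.standard_normal(n)
M = np.diag(np.diag(A))                     # Jacobi preconditioner
x, xs = bicg(A, b, bs, M, np.zeros(n), np.zeros(n))
```

In exact arithmetic the method terminates after at most $n$ steps, so for this well-conditioned $5 \times 5$ system both residuals drop to near machine precision almost immediately.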

Unpreconditioned version of the algorithm

  1. Choose initial guess $x_0$
  2. $r_0 \leftarrow b - A x_0$
  3. $\hat{r}_0 \leftarrow \hat{b} - \hat{x}_0 A$
  4. $p_0 \leftarrow r_0$
  5. $\hat{p}_0 \leftarrow \hat{r}_0$
  6. for $k = 0, 1, \ldots$ do
    1. $\alpha_k \leftarrow \dfrac{\hat{r}_k r_k}{\hat{p}_k A p_k}$
    2. $x_{k+1} \leftarrow x_k + \alpha_k p_k$
    3. $\hat{x}_{k+1} \leftarrow \hat{x}_k + \alpha_k \hat{p}_k$
    4. $r_{k+1} \leftarrow r_k - \alpha_k A p_k$
    5. $\hat{r}_{k+1} \leftarrow \hat{r}_k - \alpha_k \hat{p}_k A$
    6. $\beta_k \leftarrow \dfrac{\hat{r}_{k+1} r_{k+1}}{\hat{r}_k r_k}$
    7. $p_{k+1} \leftarrow r_{k+1} + \beta_k p_k$
    8. $\hat{p}_{k+1} \leftarrow \hat{r}_{k+1} + \beta_k \hat{p}_k$
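A minimal run of the unpreconditioned iteration on a made-up $3 \times 3$ system; the loop also checks, at every step, the residual identities $r_k = b - A x_k$ and $\hat{r}_k = \hat{b} - \hat{x}_k A$. All data here are illustrative.

```python
import numpy as np

# Illustrative 3x3 nonsymmetric system and a made-up dual right-hand side.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
bh = np.array([2.0, 0.0, 1.0])    # right-hand side of the row system xh A = bh

x, xh = np.zeros(3), np.zeros(3)
r, rh = b - A @ x, bh - xh @ A    # steps 2-3
p, ph = r.copy(), rh.copy()       # steps 4-5

for k in range(3):                # at most n steps in exact arithmetic
    alpha = (rh @ r) / (ph @ A @ p)            # step 6.1
    x, xh = x + alpha * p, xh + alpha * ph     # steps 6.2-6.3
    r_new, rh_new = r - alpha * (A @ p), rh - alpha * (ph @ A)
    # residual identities: r_k = b - A x_k and rh_k = bh - xh_k A
    assert np.allclose(r_new, b - A @ x)
    assert np.allclose(rh_new, bh - xh @ A)
    beta = (rh_new @ r_new) / (rh @ r)         # step 6.6
    p, ph = r_new + beta * p, rh_new + beta * ph
    r, rh = r_new, rh_new
```

After $n = 3$ steps both systems are solved to near machine precision.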

Discussion

The biconjugate gradient method is numerically unstable (compare to the biconjugate gradient stabilized method), but very important from a theoretical point of view. Define the iteration steps by

$x_k := x_j + P_k A^{-1} (b - A x_j),$
$x_k^* := x_j^* + (b^* - x_j^* A) P_k A^{-1},$

where $j < k$, using the related projection

$P_k := \mathbf{u}_k \left( \mathbf{v}_k^* A \mathbf{u}_k \right)^{-1} \mathbf{v}_k^* A,$

with

$\mathbf{u}_k = \left[ u_0, u_1, \ldots, u_{k-1} \right],$
$\mathbf{v}_k = \left[ v_0, v_1, \ldots, v_{k-1} \right].$

These related projections may be iterated themselves as

$P_{k+1} = P_k + \dfrac{\left( 1 - P_k \right) u_k \, v_k^* A \left( 1 - P_k \right)}{v_k^* A \left( 1 - P_k \right) u_k}.$
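This rank-one recursion can be checked numerically against the explicit definition $P_k = \mathbf{u}_k (\mathbf{v}_k^* A \mathbf{u}_k)^{-1} \mathbf{v}_k^* A$; the matrix $A$ and the columns $u_i$, $v_i$ below are arbitrary illustrative data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # arbitrary invertible matrix
U = rng.standard_normal((n, 3))                   # columns u_0, u_1, u_2
V = rng.standard_normal((n, 3))                   # columns v_0, v_1, v_2
I = np.eye(n)

def P(k):
    """P_k = u_k (v_k* A u_k)^{-1} v_k* A, built directly from the definition."""
    if k == 0:
        return np.zeros((n, n))
    Uk, Vk = U[:, :k], V[:, :k]
    return Uk @ np.linalg.inv(Vk.T @ A @ Uk) @ Vk.T @ A

# Rank-one update: P_{k+1} = P_k + (1-P_k) u_k v_k* A (1-P_k) / (v_k* A (1-P_k) u_k)
k = 2
u, v = U[:, k], V[:, k]
left = (I - P(k)) @ u              # column vector (1 - P_k) u_k
right = v @ A @ (I - P(k))         # row vector    v_k* A (1 - P_k)
P_next = P(k) + np.outer(left, right) / (right @ u)
assert np.allclose(P_next, P(k + 1))
```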

A relation to quasi-Newton methods is given by $P_k = A_k^{-1} A$ and $x_{k+1} = x_k - A_{k+1}^{-1} \left( A x_k - b \right)$, where

$A_{k+1}^{-1} = A_k^{-1} + \dfrac{\left( 1 - A_k^{-1} A \right) u_k \, v_k^* \left( 1 - A A_k^{-1} \right)}{v_k^* A \left( 1 - A_k^{-1} A \right) u_k}.$

The new directions

$p_k = \left( 1 - P_k \right) u_k,$
$p_k^* = v_k^* A \left( 1 - P_k \right) A^{-1}$

are then orthogonal to the residuals:

$v_i^* r_k = p_i^* r_k = 0,$
$r_k^* u_j = r_k^* p_j = 0,$

which themselves satisfy

$r_k = A \left( 1 - P_k \right) A^{-1} r_j,$
$r_k^* = r_j^* \left( 1 - P_k \right)$

where $i, j < k$.

The biconjugate gradient method now makes a special choice and uses the setting

$u_k = M^{-1} r_k,$
$v_k^* = r_k^* M^{-1}.$

With this particular choice, explicit evaluations of $P_k$ and $A^{-1}$ are avoided, and the algorithm takes the form stated above.

Properties

  • If $A = A^*$ is self-adjoint, $\overline{x_0^*} = x_0$ and $\overline{b^*} = b$, then $\overline{r_k^*} = r_k$, $\overline{p_k^*} = p_k$, and the conjugate gradient method produces the same sequence $x_k = \overline{x_k^*}$ at half the computational cost.
  • The sequences produced by the algorithm are biorthogonal, i.e., $p_i^* A p_j = r_i^* M^{-1} r_j = 0$ for $i \neq j$.
  • If $P_j$ is a polynomial with $\deg(P_j) + j < k$, then $r_k^* P_j \left( M^{-1} A \right) u_j = 0$. The algorithm thus produces projections onto the Krylov subspace.
  • If $P_i$ is a polynomial with $i + \deg(P_i) < k$, then $v_i^* P_i \left( A M^{-1} \right) r_k = 0$.
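The biorthogonality property can be observed numerically by running the unpreconditioned iteration (i.e., $M = 1$, so it reads $\hat{p}_i A p_j = \hat{r}_i r_j = 0$ for $i \neq j$) and pairing all stored directions and residuals; the test problem below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
b, bh = rng.standard_normal(n), rng.standard_normal(n)

x, xh = np.zeros(n), np.zeros(n)
r, rh = b.copy(), bh.copy()        # residuals for x0 = xh0 = 0
p, ph = r.copy(), rh.copy()
R, Rh, Pd, Phd = [r], [rh], [p], [ph]   # histories of r, rh, p, ph

rho = rh @ r
for k in range(n - 1):
    alpha = rho / (ph @ A @ p)
    x, xh = x + alpha * p, xh + alpha * ph
    r, rh = r - alpha * (A @ p), rh - alpha * (ph @ A)
    rho_new = rh @ r
    beta = rho_new / rho
    p, ph = r + beta * p, rh + beta * ph
    rho = rho_new
    R.append(r); Rh.append(rh); Pd.append(p); Phd.append(ph)

# All off-diagonal pairings vanish: ph_i A p_j = rh_i r_j = 0 for i != j.
max_off = max(
    max(abs(Phd[i] @ A @ Pd[j]) for i in range(n) for j in range(n) if i != j),
    max(abs(Rh[i] @ R[j]) for i in range(n) for j in range(n) if i != j),
)
assert max_off < 1e-8
```

In floating point the pairings are not exactly zero but stay near machine precision for a short, well-conditioned run like this one; the gradual loss of biorthogonality over long runs is one symptom of the method's numerical instability.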
