Power iteration

In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix $A$, the algorithm produces a number $\lambda$, which is the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, which is a corresponding eigenvector of $\lambda$, that is, $Av = \lambda v$. The algorithm is also known as the von Mises iteration.[1]

Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of the matrix $A$ by a vector, so it is effective for a very large sparse matrix with an appropriate implementation. The speed of convergence is like $|\lambda_2 / \lambda_1|^k$, where $\lambda_1$ is the dominant eigenvalue and $\lambda_2$ the second-largest in magnitude (see a later section). In other words, convergence is geometric, with a ratio determined by the spectral gap between the two largest eigenvalue magnitudes.

The method

[File: Animation of the Power Iteration Algorithm.gif. Animation visualizing the power iteration algorithm on a 2×2 matrix, which is depicted by its two eigenvectors; the error is computed as $\| \text{approximation} - \text{largest eigenvector} \|$.]

The power iteration algorithm starts with a vector $b_0$, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation

$$ b_{k+1} = \frac{A b_k}{\| A b_k \|}. $$

So, at every iteration, the vector $b_k$ is multiplied by the matrix $A$ and normalized.

If we assume $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $(b_k)$ converges to an eigenvector associated with the dominant eigenvalue.

Without the two assumptions above, the sequence $(b_k)$ does not necessarily converge. In this sequence,

$$ b_k = e^{i \phi_k} v_1 + r_k, $$

where $v_1$ is an eigenvector associated with the dominant eigenvalue, and $\| r_k \| \to 0$. The presence of the term $e^{i \phi_k}$ implies that $(b_k)$ does not converge unless $e^{i \phi_k} = 1$ (a concrete example follows below). Under the two assumptions listed above, the sequence $(\mu_k)$ defined by

$$ \mu_k = \frac{b_k^{*} A b_k}{b_k^{*} b_k} $$

converges to the dominant eigenvalue; this expression is the Rayleigh quotient of $b_k$.
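To illustrate the role of the factor $e^{i \phi_k}$, consider a concrete example: let $A = \begin{pmatrix} -2 & 0 \\ 0 & 1 \end{pmatrix}$, with dominant eigenvalue $\lambda_1 = -2$ and eigenvector $v_1 = (1, 0)^\top$. Starting from $b_0 = (1, 1)^\top / \sqrt{2}$,

$$ b_k = \frac{A^k b_0}{\| A^k b_0 \|} = \frac{\left( (-2)^k, 1 \right)^\top}{\sqrt{4^k + 1}} \approx (-1)^k v_1 \quad \text{for large } k, $$

so $e^{i \phi_k} = (\lambda_1 / |\lambda_1|)^k = (-1)^k$: the sign of $b_k$ alternates and the full sequence does not converge, yet the even- and odd-indexed subsequences each converge to $\pm v_1$, and the Rayleigh quotient $\mu_k$ still converges to $\lambda_1 = -2$.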

One may compute this with the following algorithm (shown in Python with NumPy):

#!/usr/bin/env python3

import numpy as np

def power_iteration(A, num_iterations: int):
    # Ideally choose a random vector to decrease the chance
    # that our vector is orthogonal to the dominant eigenvector
    b_k = np.random.rand(A.shape[1])

    for _ in range(num_iterations):
        # calculate the matrix-by-vector product Ab
        b_k1 = np.dot(A, b_k)

        # calculate the norm
        b_k1_norm = np.linalg.norm(b_k1)

        # re-normalize the vector
        b_k = b_k1 / b_k1_norm

    return b_k

power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 10)

The vector $b_k$ converges to an eigenvector associated with the dominant eigenvalue. Ideally, one should use the Rayleigh quotient in order to get the associated eigenvalue.

This algorithm is used to calculate the Google PageRank.

The method can also be used to calculate the spectral radius (the largest magnitude among the eigenvalues of a square matrix) by computing the Rayleigh quotient:

$$ \rho(A) = \max \{ |\lambda_1|, \dotsc, |\lambda_n| \} = \lim_{k \to \infty} \left| \frac{b_k^\top A b_k}{b_k^\top b_k} \right|. $$
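A minimal sketch of this in code, extending the NumPy example above (the function name power_iteration_with_eigenvalue is illustrative, not a library routine):

import numpy as np

def power_iteration_with_eigenvalue(A, num_iterations: int):
    # Same iteration as above
    b_k = np.random.rand(A.shape[1])
    for _ in range(num_iterations):
        b_k1 = np.dot(A, b_k)
        b_k = b_k1 / np.linalg.norm(b_k1)

    # Rayleigh quotient of the final iterate: the eigenvalue estimate
    mu = (b_k @ A @ b_k) / (b_k @ b_k)
    return mu, b_k

mu, v = power_iteration_with_eigenvalue(np.array([[0.5, 0.5], [0.2, 0.8]]), 100)
# For this matrix the dominant eigenvalue is 1, so mu approaches 1
# and abs(mu) approximates the spectral radius rho(A).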

Analysis

Let $A$ be decomposed into its Jordan canonical form: $A = V J V^{-1}$, where the first column of $V$ is an eigenvector of $A$ corresponding to the dominant eigenvalue $\lambda_1$. Since the dominant eigenvalue of $A$ is generically unique, the first Jordan block of $J$ is the $1 \times 1$ matrix $[ \lambda_1 ]$, where $\lambda_1$ is the largest eigenvalue of $A$ in magnitude. The starting vector $b_0$ can be written as a linear combination of the columns of $V$:

$$ b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n. $$

By assumption, $b_0$ has a nonzero component in the direction of the dominant eigenvector, so $c_1 \neq 0$.

The computationally useful recurrence relation for $b_{k+1}$ can be rewritten as

$$ b_{k+1} = \frac{A b_k}{\| A b_k \|} = \frac{A^{k+1} b_0}{\| A^{k+1} b_0 \|}, $$

where the expression $\frac{A^{k+1} b_0}{\| A^{k+1} b_0 \|}$ is more amenable to the following analysis.

$$ \begin{aligned} b_k &= \frac{A^k b_0}{\| A^k b_0 \|} \\ &= \frac{\left( V J V^{-1} \right)^k b_0}{\| \left( V J V^{-1} \right)^k b_0 \|} \\ &= \frac{V J^k V^{-1} b_0}{\| V J^k V^{-1} b_0 \|} \\ &= \frac{V J^k V^{-1} \left( c_1 v_1 + c_2 v_2 + \cdots + c_n v_n \right)}{\| V J^k V^{-1} \left( c_1 v_1 + c_2 v_2 + \cdots + c_n v_n \right) \|} \\ &= \frac{V J^k \left( c_1 e_1 + c_2 e_2 + \cdots + c_n e_n \right)}{\| V J^k \left( c_1 e_1 + c_2 e_2 + \cdots + c_n e_n \right) \|} \\ &= \left( \frac{\lambda_1}{|\lambda_1|} \right)^k \frac{c_1}{|c_1|} \cdot \frac{v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right)}{\left\| v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right) \right\|} \end{aligned} $$

The expression above simplifies as $k \to \infty$:

$$ \left( \frac{1}{\lambda_1} J \right)^k = \begin{bmatrix} [1] & & & \\ & \left( \frac{1}{\lambda_1} J_2 \right)^k & & \\ & & \ddots & \\ & & & \left( \frac{1}{\lambda_1} J_m \right)^k \end{bmatrix} \to \begin{bmatrix} 1 & & & \\ & 0 & & \\ & & \ddots & \\ & & & 0 \end{bmatrix} \quad \text{as} \quad k \to \infty. $$

The limit follows from the fact that the eigenvalue of $\frac{1}{\lambda_1} J_i$ is less than 1 in magnitude, so

$$ \left( \frac{1}{\lambda_1} J_i \right)^k \to 0 \quad \text{as} \quad k \to \infty. $$
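To see why this holds even when $J_i$ is a non-trivial Jordan block, consider the $2 \times 2$ case:

$$ \left( \frac{1}{\lambda_1} J_i \right)^k = \frac{1}{\lambda_1^k} \begin{bmatrix} \lambda_i & 1 \\ 0 & \lambda_i \end{bmatrix}^k = \begin{bmatrix} (\lambda_i / \lambda_1)^k & k \lambda_i^{k-1} / \lambda_1^k \\ 0 & (\lambda_i / \lambda_1)^k \end{bmatrix}, $$

where every entry tends to zero because $|\lambda_i / \lambda_1| < 1$ and the polynomial factor $k$ cannot overcome the geometric decay.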

It follows that:

$$ \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right) \to 0 \quad \text{as} \quad k \to \infty. $$

Using this fact, $b_k$ can be written in a form that emphasizes its relationship with $v_1$ when $k$ is large:

$$ b_k = \left( \frac{\lambda_1}{|\lambda_1|} \right)^k \frac{c_1}{|c_1|} \cdot \frac{v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right)}{\left\| v_1 + \frac{1}{c_1} V \left( \frac{1}{\lambda_1} J \right)^k \left( c_2 e_2 + \cdots + c_n e_n \right) \right\|} = e^{i \phi_k} \frac{c_1}{|c_1|} \frac{v_1}{\| v_1 \|} + r_k, $$

where $e^{i \phi_k} = \left( \lambda_1 / |\lambda_1| \right)^k$ and $\| r_k \| \to 0$ as $k \to \infty$.

The sequence $(b_k)$ is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is only unique up to a scalar, so although the sequence $(b_k)$ may not converge, $b_k$ is nearly an eigenvector of $A$ for large $k$.

Alternatively, if $A$ is diagonalizable, then the following proof yields the same result.

Let $\lambda_1, \lambda_2, \ldots, \lambda_m$ be the $m$ eigenvalues (counted with multiplicity) of $A$ and let $v_1, v_2, \ldots, v_m$ be the corresponding eigenvectors. Suppose that $\lambda_1$ is the dominant eigenvalue, so that $|\lambda_1| > |\lambda_j|$ for $j > 1$.

The initial vector $b_0$ can be written:

$$ b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_m v_m. $$

If $b_0$ is chosen randomly (with uniform probability), then $c_1 \neq 0$ with probability 1. Now,

$$ \begin{aligned} A^k b_0 &= c_1 A^k v_1 + c_2 A^k v_2 + \cdots + c_m A^k v_m \\ &= c_1 \lambda_1^k v_1 + c_2 \lambda_2^k v_2 + \cdots + c_m \lambda_m^k v_m \\ &= c_1 \lambda_1^k \left( v_1 + \frac{c_2}{c_1} \left( \frac{\lambda_2}{\lambda_1} \right)^k v_2 + \cdots + \frac{c_m}{c_1} \left( \frac{\lambda_m}{\lambda_1} \right)^k v_m \right) \\ &\to c_1 \lambda_1^k v_1 \qquad \left( \text{since } \left| \frac{\lambda_j}{\lambda_1} \right| < 1 \text{ for } j > 1 \right) \end{aligned} $$

On the other hand:

$$ b_k = \frac{A^k b_0}{\| A^k b_0 \|}. $$

Therefore, $b_k$ converges to (a multiple of) the eigenvector $v_1$. The convergence is geometric, with ratio

$$ \left| \frac{\lambda_2}{\lambda_1} \right|, $$

where $\lambda_2$ denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
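This rate is easy to observe numerically. The following sketch (an illustrative check in the style of the NumPy example above) uses a matrix with $\lambda_1 = 1$ and $\lambda_2 = 0.3$, so the error should shrink by roughly a factor of 0.3 per iteration:

import numpy as np

A = np.array([[0.5, 0.5], [0.2, 0.8]])  # eigenvalues 1 and 0.3
v1 = np.array([1.0, 1.0]) / np.sqrt(2)  # unit eigenvector for lambda_1 = 1

b_k = np.random.rand(2)
for k in range(20):
    b_k = A @ b_k
    b_k /= np.linalg.norm(b_k)
    # Compare against the known eigenvector, adjusting for sign,
    # since eigenvectors are only defined up to a scalar
    error = np.linalg.norm(b_k - np.sign(b_k @ v1) * v1)
    print(k, error)
# Successive errors decrease by roughly a factor of |lambda_2 / lambda_1| = 0.3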

Applications

Although the power iteration method approximates only one eigenvalue of a matrix, it remains useful for certain computational problems. For instance, Google uses it to calculate the PageRank of documents in their search engine,[2] and Twitter uses it to show users recommendations of whom to follow.[3] The power iteration method is especially suitable for sparse matrices, such as the web matrix, or as a matrix-free method that does not require storing the coefficient matrix $A$ explicitly but can instead access a function evaluating matrix-vector products $Ax$; a sketch of this variant follows. For non-symmetric matrices that are well-conditioned, the power iteration method can outperform the more complex Arnoldi iteration. For symmetric matrices, the power iteration method is rarely used, since its convergence speed can be increased easily without sacrificing the small cost per iteration; see, e.g., Lanczos iteration and LOBPCG.
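As a minimal sketch of the matrix-free variant, power iteration only needs a routine that applies $A$ to a vector; the names power_iteration_matrix_free and matvec are illustrative assumptions, with matvec assumed to compute $x \mapsto Ax$:

import numpy as np

def power_iteration_matrix_free(matvec, dim: int, num_iterations: int):
    # matvec is a callable computing x -> A @ x; A itself is never stored
    b_k = np.random.rand(dim)
    for _ in range(num_iterations):
        b_k1 = matvec(b_k)
        b_k = b_k1 / np.linalg.norm(b_k1)
    return b_k

# Example: the 2x2 matrix from earlier, applied without forming it
v = power_iteration_matrix_free(
    lambda x: np.array([0.5 * x[0] + 0.5 * x[1], 0.2 * x[0] + 0.8 * x[1]]),
    dim=2, num_iterations=50)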

Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix $A^{-1}$; a sketch follows this paragraph. Other algorithms look at the whole subspace generated by the vectors $b_k$. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration. Gram iteration[4] is a super-linear and deterministic method to compute the largest eigenpair.
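The inverse iteration sketch referenced above, assuming $A$ is invertible (in practice one would factor $A$ once rather than solving from scratch at each step):

import numpy as np

def inverse_iteration(A, num_iterations: int):
    # Power iteration applied to A^{-1}: converges toward an eigenvector
    # of the eigenvalue of A that is smallest in magnitude
    b_k = np.random.rand(A.shape[1])
    for _ in range(num_iterations):
        # Solving A x = b_k applies A^{-1} without forming it explicitly
        b_k1 = np.linalg.solve(A, b_k)
        b_k = b_k1 / np.linalg.norm(b_k1)
    return b_k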

References

  1. Richard von Mises and H. Pollaczek-Geiringer, "Praktische Verfahren der Gleichungsauflösung", ZAMM – Zeitschrift für Angewandte Mathematik und Mechanik 9, 152–164 (1929).
  2. [Unexpanded citation template in the source.]
  3. Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Bosagh Zadeh, "WTF: The Who-to-Follow System at Twitter", Proceedings of the 22nd International Conference on World Wide Web (2013).
  4. [Unexpanded citation template in the source.]