Eigendecomposition of a matrix

In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

Fundamental theory of matrix eigenvectors and eigenvalues

A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$ for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.

This yields an equation for the eigenvalues

$$p(\lambda) = \det(\mathbf{A} - \lambda\mathbf{I}) = 0.$$

We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have $N_\lambda$ distinct solutions, where $1 \le N_\lambda \le N$. The set of solutions, that is, the eigenvalues, is called the spectrum of A.[1][2][3]
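
As an illustration of the characteristic polynomial, the following minimal NumPy sketch (with a hypothetical 2 × 2 matrix chosen only for the example) computes its coefficients and recovers the spectrum as their roots:

```python
import numpy as np

# A hypothetical 2x2 example matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A),
# highest degree first: lambda^2 - 4*lambda + 3
coeffs = np.poly(A)

# Its roots are the eigenvalues, i.e. the spectrum of A
eigenvalues = np.roots(coeffs)
print(coeffs)       # [ 1. -4.  3.]
print(eigenvalues)  # [3. 1.]
```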

If the field of scalars is algebraically closed, then we can factor p as

$$p(\lambda) = (\lambda - \lambda_1)^{n_1}(\lambda - \lambda_2)^{n_2} \cdots (\lambda - \lambda_{N_\lambda})^{n_{N_\lambda}} = 0.$$

The integer $n_i$ is termed the algebraic multiplicity of eigenvalue $\lambda_i$. The algebraic multiplicities sum to N: $\sum_{i=1}^{N_\lambda} n_i = N$.

For each eigenvalue $\lambda_i$, we have a specific eigenvalue equation

$$(\mathbf{A} - \lambda_i\mathbf{I})\mathbf{v} = \mathbf{0}.$$

There will be $1 \le m_i \le n_i$ linearly independent solutions to each eigenvalue equation. The linear combinations of the $m_i$ solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue $\lambda_i$. The integer $m_i$ is termed the geometric multiplicity of $\lambda_i$. It is important to keep in mind that the algebraic multiplicity $n_i$ and geometric multiplicity $m_i$ may or may not be equal, but we always have $m_i \le n_i$. The simplest case is of course when $m_i = n_i = 1$. The total number of linearly independent eigenvectors, $N_\mathbf{v}$, can be calculated by summing the geometric multiplicities $\sum_{i=1}^{N_\lambda} m_i = N_\mathbf{v}$.
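
The distinction between algebraic and geometric multiplicity can be checked numerically. The sketch below uses a hypothetical defective matrix whose single eigenvalue has algebraic multiplicity 2 but geometric multiplicity 1:

```python
import numpy as np

# Hypothetical example: lambda = 2 has algebraic multiplicity 2
# but geometric multiplicity 1 (a defective matrix).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)   # [2., 2.], so n_i = 2
lam = 2.0

# Geometric multiplicity = dimension of the nullspace of (A - lambda*I)
m = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(eigenvalues)  # algebraic multiplicity of lambda = 2 is 2
print(m)            # geometric multiplicity is 1
```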

The eigenvectors can be indexed by eigenvalues, using a double index, with $\mathbf{v}_{ij}$ being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index $\mathbf{v}_k$, with $k = 1, 2, \dots, N_\mathbf{v}$.

Eigendecomposition of a matrix

Let A be a square n × n matrix with n linearly independent eigenvectors $\mathbf{q}_i$ (where $i = 1, \dots, n$). Then A can be factored as

$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$$

where Q is the square n × n matrix whose ith column is the eigenvector $\mathbf{q}_i$ of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, $\Lambda_{ii} = \lambda_i$. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ (which is a shear matrix) cannot be diagonalized.
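
A minimal NumPy sketch of this factorization, using an arbitrary diagonalizable matrix as an example, builds Q from the eigenvectors and verifies that $\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$ reproduces A:

```python
import numpy as np

# A hypothetical diagonalizable matrix
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix Q whose
# columns are the (normalized) eigenvectors
eigenvalues, Q = np.linalg.eig(A)
Lam = np.diag(eigenvalues)

# Reconstruct A = Q Lam Q^{-1}
A_reconstructed = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_reconstructed))  # True
```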

The n eigenvectors $\mathbf{q}_i$ are usually normalized, but they don't have to be. A non-normalized set of n eigenvectors, $\mathbf{v}_i$, can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of $\mathbf{Q}^{-1}$. If one of the eigenvalues $\lambda_i$ has multiple linearly independent eigenvectors (that is, the geometric multiplicity of $\lambda_i$ is greater than 1), then these eigenvectors for this eigenvalue $\lambda_i$ can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it's always possible to diagonalize A in an orthonormal basis of eigenvectors.

The decomposition can be derived from the fundamental property of eigenvectors:

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{v} \implies \mathbf{A}\mathbf{Q} = \mathbf{Q}\mathbf{\Lambda} \implies \mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}.$$

The linearly independent eigenvectors $\mathbf{q}_i$ with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products $\mathbf{A}\mathbf{x}$, for $\mathbf{x} \in \mathbb{C}^n$, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors $\mathbf{q}_i$ with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.

The linearly independent eigenvectors $\mathbf{q}_i$ with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.

Example

The 2 × 2 real matrix

$$\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix}$$

may be decomposed into a diagonal matrix through multiplication of a non-singular matrix

$$\mathbf{B} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \mathbb{R}^{2 \times 2}.$$

Then

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix},$$

for some real diagonal matrix $\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}$.

Multiplying both sides of the equation on the left by B:

$$\begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}.$$

The above equation can be decomposed into two simultaneous equations:

$$\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = \begin{bmatrix} ax \\ cx \end{bmatrix} \\[6pt] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = \begin{bmatrix} by \\ dy \end{bmatrix} \end{cases}$$

Factoring out the eigenvalues x and y:

$$\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = x \begin{bmatrix} a \\ c \end{bmatrix} \\[6pt] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = y \begin{bmatrix} b \\ d \end{bmatrix} \end{cases}$$

Letting $\mathbf{a} = \begin{bmatrix} a \\ c \end{bmatrix},\ \mathbf{b} = \begin{bmatrix} b \\ d \end{bmatrix}$, this gives us two vector equations:

$$\begin{cases} \mathbf{A}\mathbf{a} = x\mathbf{a} \\ \mathbf{A}\mathbf{b} = y\mathbf{b} \end{cases}$$

which can be represented by a single vector equation involving two solutions as eigenvalues:

$$\mathbf{A}\mathbf{u} = \lambda\mathbf{u}$$

where λ represents the two eigenvalues x and y, and u represents the vectors a and b.

Shifting λu to the left-hand side and factoring u out:

$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{u} = \mathbf{0}$$

Since B is non-singular, it is essential that u is nonzero. Therefore,

$$\det(\mathbf{A} - \lambda\mathbf{I}) = 0.$$

Thus

$$(1 - \lambda)(3 - \lambda) = 0,$$

giving us the solutions of the eigenvalues for the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus $\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$.

Putting the solutions back into the above simultaneous equations:

$$\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = 1 \begin{bmatrix} a \\ c \end{bmatrix} \\[6pt] \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = 3 \begin{bmatrix} b \\ d \end{bmatrix} \end{cases}$$

Solving the equations, we have $a = -2c$ and $b = 0$, with c and d nonzero real numbers. Thus the matrix B required for the eigendecomposition of A is

$$\mathbf{B} = \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}, \quad c, d \in \mathbb{R} \setminus \{0\},$$

that is:

$$\begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}, \quad c, d \in \mathbb{R} \setminus \{0\}.$$
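
The result can be checked numerically; the sketch below makes the particular choice c = d = 1 for the free parameters:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# Any nonzero c and d work; take c = d = 1, so B = [[-2, 0], [1, 1]]
c, d = 1.0, 1.0
B = np.array([[-2.0 * c, 0.0],
              [c,        d]])

# B^{-1} A B should be the diagonal matrix of eigenvalues
Lam = np.linalg.inv(B) @ A @ B
print(np.round(Lam, 10))   # [[1. 0.]
                           #  [0. 3.]]
```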

Matrix inverse via eigendecomposition

If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by

$$\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$$

If A is a symmetric matrix, since Q is formed from the eigenvectors of A, Q is guaranteed to be an orthogonal matrix, therefore $\mathbf{Q}^{-1} = \mathbf{Q}^\mathsf{T}$. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:

$$\left[\mathbf{\Lambda}^{-1}\right]_{ii} = \frac{1}{\lambda_i}$$
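
A short NumPy sketch of this inversion formula, using a hypothetical symmetric matrix so that $\mathbf{Q}^{-1} = \mathbf{Q}^\mathsf{T}$:

```python
import numpy as np

# Hypothetical symmetric matrix with no zero eigenvalues
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(A)     # Q is orthogonal for symmetric A
Lam_inv = np.diag(1.0 / eigenvalues)   # invert the diagonal entrywise

A_inv = Q @ Lam_inv @ Q.T              # Q^{-1} = Q^T in the symmetric case
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```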

Practical implications

When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.[4]

Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.

The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.

The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.

The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).

If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:[5]

$$\min \left| \nabla^2 \lambda_\mathrm{s} \right|$$

where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
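
One plausible reading of this criterion is sketched below; the eigenvalue data are hypothetical, and the discrete second difference is used as the one-dimensional Laplacian:

```python
import numpy as np

def lowest_reliable_eigenvalue(eigenvalues):
    """Sketch of the rank-sorted Laplacian criterion described above.

    Sorts the eigenvalues, takes the discrete second difference
    (a 1-D Laplacian), and returns the eigenvalue at which its
    absolute value is smallest.
    """
    lam_sorted = np.sort(eigenvalues)            # rank-sorted eigenvalues
    laplacian = np.abs(np.diff(lam_sorted, n=2))
    i = np.argmin(laplacian) + 1                 # offset from the double difference
    return lam_sorted[i]

# Hypothetical spectrum: a cluster of small "noise" eigenvalues plus signal
eigs = np.array([0.011, 0.010, 0.012, 0.009, 3.2, 7.5])
print(lowest_reliable_eigenvalue(eigs))
```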

Functional calculus

The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by

$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots$$

then we know that

$$f(\mathbf{A}) = \mathbf{Q}\, f(\mathbf{\Lambda})\, \mathbf{Q}^{-1}$$

Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:

$$\left[f(\mathbf{\Lambda})\right]_{ii} = f(\lambda_i)$$

The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.

A similar technique works more generally with the holomorphic functional calculus, using $\mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}$ from above. Once again, we find that

$$\left[f(\mathbf{\Lambda})\right]_{ii} = f(\lambda_i)$$

Examples

๐€2=(๐Λ๐1)(๐Λ๐1)=๐Λ(๐1๐)Λ๐1=๐Λ2๐1๐€n=๐Λn๐1exp๐€=๐exp(Λ)๐1 which are examples for the functions f(x)=x2,f(x)=xn,f(x)=expx. Furthermore, exp๐€ is the matrix exponential.

Decomposition for spectral matrices

Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.[6]

Normal matrices

A complex-valued square matrix A is normal (meaning $\mathbf{A}^*\mathbf{A} = \mathbf{A}\mathbf{A}^*$, where $\mathbf{A}^*$ is the conjugate transpose) if and only if it can be decomposed as

$$\mathbf{A} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^*,$$

where U is a unitary matrix (meaning $\mathbf{U}^* = \mathbf{U}^{-1}$) and $\mathbf{\Lambda} = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ is a diagonal matrix.[7] The columns $\mathbf{u}_1, \dots, \mathbf{u}_n$ of U form an orthonormal basis and are eigenvectors of A with corresponding eigenvalues $\lambda_1, \dots, \lambda_n$.[8]

For example, consider the 2 × 2 normal matrix $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$.

The eigenvalues are $\lambda_1 = 3$ and $\lambda_2 = -1$.

The (normalized) eigenvectors corresponding to these eigenvalues are $\mathbf{u}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\mathbf{u}_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

The diagonalization is $\mathbf{A} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^*$, where $\mathbf{U} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}$, $\mathbf{\Lambda} = \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix}$ and $\mathbf{U}^* = \mathbf{U}^{-1} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}$.

The verification is

$$\mathbf{U}\mathbf{\Lambda}\mathbf{U}^* = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} = \mathbf{A}.$$

This example illustrates the process of diagonalizing a normal matrix A by finding its eigenvalues and eigenvectors, forming the unitary matrix U and the diagonal matrix Λ, and verifying the decomposition.
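
The same example can be reproduced with NumPy; note that numpy.linalg.eigh returns the eigenvalues in ascending order, so the columns of U may appear in a different order than above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# For a symmetric (hence normal) matrix, eigh returns real eigenvalues
# and an orthonormal set of eigenvectors in the columns of U
eigenvalues, U = np.linalg.eigh(A)
print(eigenvalues)                                        # [-1.  3.]
print(np.allclose(U @ np.diag(eigenvalues) @ U.T, A))     # True
print(np.allclose(U.T @ U, np.eye(2)))                    # U is unitary (here: orthogonal)
```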

Subsets of important classes of matrices

Real symmetric matrices

As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as

$$\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^\mathsf{T},$$

where Q is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.[9]
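
A brief NumPy sketch, using a randomly generated symmetric matrix, confirms the orthogonality of Q and the decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
A = (S + S.T) / 2                 # a hypothetical real symmetric matrix

eigenvalues, Q = np.linalg.eigh(A)
print(np.allclose(Q @ Q.T, np.eye(3)))                   # Q is orthogonal
print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A))    # A = Q Lam Q^T
```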

Diagonalizable matrices

Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as $\mathbf{A} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}$, where P is a matrix whose columns are eigenvectors of A and D is a diagonal matrix consisting of the corresponding eigenvalues of A.[8]

Positive definite matrices

Positive definite matrices are matrices for which all eigenvalues are positive. They can be decomposed as $\mathbf{A} = \mathbf{L}\mathbf{L}^\mathsf{T}$ using the Cholesky decomposition, where L is a lower triangular matrix.[10]
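
For example, a hypothetical positive definite matrix can be checked and factored with NumPy:

```python
import numpy as np

# Hypothetical symmetric positive definite matrix (all eigenvalues > 0)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

print(np.linalg.eigvalsh(A))      # both eigenvalues are positive

L = np.linalg.cholesky(A)         # lower triangular factor
print(np.allclose(L @ L.T, A))    # A = L L^T
```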

Unitary and Hermitian matrices

Unitary matrices satisfy $\mathbf{U}\mathbf{U}^\mathsf{T} = \mathbf{I}$ in the real case (where they are orthogonal matrices) or $\mathbf{U}\mathbf{U}^* = \mathbf{I}$ in the complex case, where $\mathbf{U}^*$ denotes the conjugate transpose. Being normal, they can be diagonalized by unitary transformations.[8]

Hermitian matrices satisfy $\mathbf{H} = \mathbf{H}^*$, where $\mathbf{H}^*$ denotes the conjugate transpose. They can be diagonalized using unitary matrices (orthogonal matrices in the real symmetric case).[8]

Useful facts

Useful facts regarding eigenvectors

  • If A is Hermitian and full-rank, the basis of eigenvectors may be chosen to be mutually orthogonal. The eigenvalues are real.
  • The eigenvectors of $\mathbf{A}^{-1}$ are the same as the eigenvectors of A.
  • Eigenvectors are only defined up to a multiplicative constant. That is, if $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$ then $c\mathbf{v}$ is also an eigenvector for any nonzero scalar c. In particular, $-\mathbf{v}$ and $e^{i\theta}\mathbf{v}$ (for any θ) are also eigenvectors.
  • In the case of degenerate eigenvalues (an eigenvalue having more than one eigenvector), the eigenvectors have an additional freedom of linear transformation, that is to say, any linear (orthonormal) combination of eigenvectors sharing an eigenvalue (in the degenerate subspace) is itself an eigenvector (in the subspace).

Numerical computations

Numerical computation of eigenvalues

Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.

In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.

Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.[11]

A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as

$$\frac{\mathbf{A}\mathbf{v}}{\|\mathbf{A}\mathbf{v}\|},\ \frac{\mathbf{A}^2\mathbf{v}}{\|\mathbf{A}^2\mathbf{v}\|},\ \frac{\mathbf{A}^3\mathbf{v}}{\|\mathbf{A}^3\mathbf{v}\|},\ \dots$$

This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine.[12] Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of the Arnoldi iteration.[11] Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.[11]
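
A minimal sketch of the power iteration described above follows; the matrix and iteration count are arbitrary choices for illustration:

```python
import numpy as np

def power_method(A, num_iters=1000, seed=0):
    """Minimal sketch of the power iteration described above."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])   # random starting vector
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)         # renormalize at every step
    # Rayleigh quotient gives the corresponding eigenvalue estimate
    lam = v @ A @ v
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)   # approximately the eigenvalue of largest magnitude
```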

Numerical computation of eigenvectors

Once the eigenvalues are computed, the eigenvectors can be calculated by solving the equation

$$(\mathbf{A} - \lambda_i\mathbf{I})\mathbf{v}_{i,j} = \mathbf{0}$$

using Gaussian elimination or any other method for solving matrix equations.
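
For a known eigenvalue, one way to carry this out numerically is to take a null-space vector of $\mathbf{A} - \lambda_i\mathbf{I}$, here obtained from the singular value decomposition; the matrix is a hypothetical example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)[-1]            # largest eigenvalue of A

# A nullspace vector of (A - lam*I) is an eigenvector for lam:
# take the right-singular vector for the smallest singular value.
_, _, Vt = np.linalg.svd(A - lam * np.eye(2))
v = Vt[-1]
print(np.allclose(A @ v, lam * v))         # True
```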

However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector).[11] In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm.[11] (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.[13]) For Hermitian matrices, the divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[11]

Additional topics

Generalized eigenspaces

Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of $\mathbf{A} - \lambda\mathbf{I}$. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix $(\mathbf{A} - \lambda\mathbf{I})^k$ for any sufficiently large k. That is, it is the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector which eventually becomes 0 if $\mathbf{A} - \lambda\mathbf{I}$ is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
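
The shear matrix from the eigendecomposition section above illustrates the difference between the two dimensions; a short NumPy check:

```python
import numpy as np

# The shear matrix from the eigendecomposition section: lambda = 1 has
# algebraic multiplicity 2 but geometric multiplicity 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - 1.0 * np.eye(2)                     # A - lambda*I

# Ordinary eigenspace: nullspace of (A - I) is 1-dimensional ...
print(2 - np.linalg.matrix_rank(N))         # 1
# ... but the generalized eigenspace, the nullspace of (A - I)^2, is 2-dimensional.
print(2 - np.linalg.matrix_rank(N @ N))     # 2
```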

This usage should not be confused with the generalized eigenvalue problem described below.

Conjugate eigenvector

A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}^*$. For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.

Generalized eigenvalue problem

A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys

$$\mathbf{A}\mathbf{v} = \lambda\mathbf{B}\mathbf{v}$$

where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the second sense), and λ is called the generalized eigenvalue of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation:

$$\det(\mathbf{A} - \lambda\mathbf{B}) = 0.$$

If n linearly independent vectors $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ can be found, such that for every $i \in \{1, \dots, n\}$, $\mathbf{A}\mathbf{v}_i = \lambda_i \mathbf{B}\mathbf{v}_i$, then we define the matrices P and D such that

$$\mathbf{P} = \begin{bmatrix} | & & | \\ \mathbf{v}_1 & \cdots & \mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} (\mathbf{v}_1)_1 & \cdots & (\mathbf{v}_n)_1 \\ \vdots & & \vdots \\ (\mathbf{v}_1)_n & \cdots & (\mathbf{v}_n)_n \end{bmatrix}, \qquad (\mathbf{D})_{ij} = \begin{cases} \lambda_i, & \text{if } i = j \\ 0, & \text{otherwise} \end{cases}$$

Then the following equality holds:

$$\mathbf{A} = \mathbf{B}\mathbf{P}\mathbf{D}\mathbf{P}^{-1}$$

And the proof is

$$\mathbf{A}\mathbf{P} = \mathbf{A}\begin{bmatrix} | & & | \\ \mathbf{v}_1 & \cdots & \mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ \mathbf{A}\mathbf{v}_1 & \cdots & \mathbf{A}\mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ \lambda_1\mathbf{B}\mathbf{v}_1 & \cdots & \lambda_n\mathbf{B}\mathbf{v}_n \\ | & & | \end{bmatrix} = \begin{bmatrix} | & & | \\ \mathbf{B}\mathbf{v}_1 & \cdots & \mathbf{B}\mathbf{v}_n \\ | & & | \end{bmatrix}\mathbf{D} = \mathbf{B}\mathbf{P}\mathbf{D}$$

And since P is invertible, we multiply the equation from the right by its inverse, finishing the proof.

The set of matrices of the form $\mathbf{A} - \lambda\mathbf{B}$, where λ is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A, B) of matrices.[14]

If B is invertible, then the original problem can be written in the form

$$\mathbf{B}^{-1}\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$$

which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case $\mathbf{B}^{-1}\mathbf{A}$ is not generally Hermitian and important properties of the solution are no longer apparent.

If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues $\lambda_i$ are real and eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$ with distinct eigenvalues are B-orthogonal ($\mathbf{v}_1^* \mathbf{B} \mathbf{v}_2 = 0$).[15] In this case, eigenvectors can be chosen so that the matrix P defined above satisfies $\mathbf{P}^*\mathbf{B}\mathbf{P} = \mathbf{I}$ or $\mathbf{P}\mathbf{P}^*\mathbf{B} = \mathbf{I}$, and there exists a basis of generalized eigenvectors (it is not a defective problem).[14] This case is sometimes called a Hermitian definite pencil or definite pencil.[14]
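
Such a Hermitian definite pencil can be solved directly, for example with scipy.linalg.eigh, which accepts the pair (A, B) and returns B-orthonormal eigenvectors; the matrices below are hypothetical examples:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical symmetric A and symmetric positive definite B
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Solves A v = lambda B v directly, without forming inv(B) @ A
eigenvalues, P = eigh(A, B)
print(eigenvalues)                              # real eigenvalues
print(np.allclose(P.T @ B @ P, np.eye(2)))      # columns of P are B-orthonormal
```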
