Hilbert–Schmidt theorem


In mathematical analysis, the Hilbert–Schmidt theorem, also known as the eigenfunction expansion theorem, is a fundamental result concerning compact, self-adjoint operators on Hilbert spaces. In the theory of partial differential equations, it is very useful in solving elliptic boundary value problems.

Statement of the theorem

Let (H, ⟨ , ⟩) be a real or complex Hilbert space and let A : H → H be a bounded, compact, self-adjoint operator. Then there is a sequence of non-zero real eigenvalues λi, i = 1, …, N, with N equal to the rank of A, such that |λi| is monotonically non-increasing and, if N = +∞, then λi → 0 as i → +∞.

Furthermore, if each eigenvalue of A is repeated in the sequence according to its multiplicity, then there exists an orthonormal set φi, i = 1, …, N, of corresponding eigenfunctions, i.e., Aφi = λiφi for i = 1, …, N.

Moreover, the functions φi form an orthonormal basis for the closure of the range of A, and A can be written as Au = ∑_{i=1}^{N} λi ⟨φi, u⟩ φi for all u ∈ H.
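In the finite-dimensional case, the theorem reduces to the ordinary spectral decomposition of a symmetric (or Hermitian) matrix, and the expansion Au = ∑ λi ⟨φi, u⟩ φi can be checked numerically. The following sketch (a hypothetical illustration, not part of the theorem's statement) verifies the expansion for a random real symmetric matrix, where the eigenvectors returned by NumPy's eigh play the role of the orthonormal eigenfunctions φi:

```python
import numpy as np

# Finite-dimensional illustration of the Hilbert-Schmidt expansion:
# for a real symmetric matrix A on R^n, the theorem reduces to the
# spectral decomposition  A u = sum_i lambda_i <phi_i, u> phi_i.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # symmetric, hence self-adjoint on R^4

# eigh returns real eigenvalues and an orthonormal set of eigenvectors.
eigvals, eigvecs = np.linalg.eigh(A)

u = rng.standard_normal(4)

# Reconstruct Au from the eigenfunction expansion.
Au_expansion = sum(lam * np.dot(phi, u) * phi
                   for lam, phi in zip(eigvals, eigvecs.T))

assert np.allclose(A @ u, Au_expansion)
```

Sorting the eigenvalues by |λi| in non-increasing order, as in the statement above, does not change the sum; it only fixes the order of the terms.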
