Random matrix

In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms,[1][2] the thermal conductivity of a lattice, or the emergence of quantum chaos,[3] can be modeled mathematically as problems concerning large, random matrices.

Applications

Physics

In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms.[1][2] Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[4] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation.

In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.[3]

In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model).[5] Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is, beam splitters and phase shifters).[6]

Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics,[7] quantum gravity in two dimensions,[8] mesoscopic physics,[9] spin-transfer torque,[10] the fractional quantum Hall effect,[11] Anderson localization,[12] quantum dots,[13] and superconductors.[14]

Mathematical statistics and numerical analysis

In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples.[15] Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue (i.e. the eigenvalue of largest magnitude) of a finite sum of random Hermitian matrices.[16] Random matrix theory is used to study the spectral properties of random matrices—such as sample covariance matrices—which is of particular interest in high-dimensional statistics. Random matrix theory has also seen applications in neuronal networks[17] and deep learning, with recent work utilizing random matrices to show that hyper-parameter tunings can be cheaply transferred between large neural networks without the need for re-training.[18]
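As a rough numerical illustration of this concentration (a minimal sketch, not the inequalities themselves; the dimension, number of summands, and entry distribution are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d, k, trials = 50, 200, 100  # dimension, summands per sum, repetitions

def random_hermitian(d, rng):
    """A random symmetric matrix with bounded, mean-zero entries."""
    A = rng.uniform(-1, 1, size=(d, d))
    return (A + A.T) / 2

# The largest eigenvalue of a sum of k independent random Hermitian
# matrices concentrates tightly around its mean, as matrix
# Chernoff/Bernstein-type inequalities predict.
lmax = [
    np.linalg.eigvalsh(sum(random_hermitian(d, rng) for _ in range(k)))[-1]
    for _ in range(trials)
]
print(f"lambda_max: mean = {np.mean(lmax):.2f}, std = {np.std(lmax):.2f}")
</syntaxhighlight>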

In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[19] to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.[20]

Number theory

In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices.[21] The connection was first discovered by Hugh Montgomery and Freeman Dyson. It is connected to the Hilbert–Pólya conjecture.

Free probability

The relation of free probability with random matrices[22] is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator algebraic context; at the beginning there was no relation at all with random matrices. This connection was only revealed later in 1991 by Voiculescu;[23] he was motivated by the fact that the limit distribution which he found in his free central limit theorem had appeared before in Wigner's semi-circle law in the random matrix context.

Computational neuroscience

In the field of computational neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with a random connectivity matrix were shown to exhibit a phase transition to chaos[24] when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to mean connection strength. Instead, the stability of fluctuations depends on the variation in connection strength,[25][26] and the time to synchrony depends on network topology.[27][28]
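The transition can be seen in the spectrum of the connectivity matrix itself. A minimal sketch, assuming the standard setup in which the synaptic weights are i.i.d. Gaussian with variance g²/N, so that by the circular law the spectral radius of the connectivity matrix approaches g and the quiescent state destabilizes as g crosses 1:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 1000

# J has i.i.d. N(0, g^2/N) entries; its spectral radius approaches g,
# so the fixed point x = 0 of dx/dt = -x + J x loses stability at g = 1.
for g in (0.5, 1.0, 1.5):
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    radius = np.abs(np.linalg.eigvals(J)).max()
    print(f"g = {g:.1f}: spectral radius ~ {radius:.2f}")
</syntaxhighlight>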

In the analysis of massive data such as fMRI, random matrix theory has been applied in order to perform dimension reduction. When applying an algorithm such as PCA, it is important to be able to select the number of significant components. Several criteria for selecting components exist (based on explained variance, Kaiser's criterion, etc.). In this context, random matrix theory contributes the Marchenko–Pastur distribution, which gives the theoretical upper and lower limits of the eigenvalues of a covariance matrix of purely random data. This distribution serves as a null hypothesis, against which one identifies the eigenvalues (and their eigenvectors) that deviate from the theoretical random range. The components that deviate in this way span the reduced-dimensional space (see examples in fMRI[29][30]).
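A sketch of this selection procedure (the sample counts, the three planted signal directions, and the unit noise variance below are illustrative assumptions, not taken from the cited studies):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
m, p = 2000, 400  # samples (e.g. time points) x variables (e.g. voxels)
q = p / m
sigma2 = 1.0      # noise variance

# White-noise data plus three planted signal directions.
X = rng.normal(size=(m, p))
for _ in range(3):
    u = rng.normal(size=p)
    X += 5.0 * rng.normal(size=(m, 1)) * (u / np.linalg.norm(u))

evals = np.linalg.eigvalsh(X.T @ X / m)

# Marchenko-Pastur upper edge: eigenvalues above it reject the
# pure-noise null hypothesis and are kept as signal components.
mp_upper = sigma2 * (1 + np.sqrt(q)) ** 2
print(f"MP upper edge = {mp_upper:.2f}, "
      f"components kept: {(evals > mp_upper).sum()}")
</syntaxhighlight>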

Optimal control

In optimal control theory, the evolution of n state variables through time depends at any time on their own values and on the values of k control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control.[31][32] A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.

Computational mechanics

In computational mechanics, epistemic uncertainties arising from a lack of knowledge about the physics of the modeled system give rise to mathematical operators, associated with the computational model, that are deficient in a certain sense: they lack properties linked to the unmodeled physics. When such operators are discretized to perform computational simulations, their accuracy is limited by the missing physics. To compensate for this deficiency, it is not enough to make the model parameters random; the mathematical operator itself must be made random, so as to generate families of computational models in the hope that one of them captures the missing physics. Random matrices have been used in this sense,[33] with applications in vibroacoustics, wave propagation, materials science, fluid mechanics, heat transfer, etc.

Engineering

Random matrix theory can be applied to electrical and communications engineering research to study, model, and develop massive multiple-input multiple-output (MIMO) radio systems.

History

Random matrix theory first gained attention beyond the mathematics literature in the context of nuclear physics. Experiments by Enrico Fermi and others demonstrated that individual nucleons cannot be modeled as moving independently, leading Niels Bohr to formulate the idea of a compound nucleus. Because direct nucleon-nucleon interactions were unknown, Eugene Wigner and Leonard Eisenbud proposed that the nuclear Hamiltonian could be modeled as a random matrix. For larger atoms, the distribution of the energy eigenvalues of the Hamiltonian could be computed in order to approximate scattering cross sections by invoking the Wishart distribution.[34]

Gaussian ensembles

The most commonly studied random matrix distributions are the Gaussian ensembles: GOE, GUE and GSE. They are often denoted by their Dyson index: β = 1 for GOE, β = 2 for GUE, and β = 4 for GSE. This index counts the number of real components per matrix element.

Definitions

The Gaussian unitary ensemble GUE(n) is described by the Gaussian measure with density
$$\frac{1}{Z_{\text{GUE}(n)}}\, e^{-\frac{n}{2} \operatorname{tr} H^2}$$
on the space of $n \times n$ Hermitian matrices $H = (H_{ij})_{i,j=1}^n$. Here $Z_{\text{GUE}(n)} = 2^{n/2} (\pi/n)^{\frac{1}{2} n^2}$ is a normalization constant, chosen so that the integral of the density is equal to one. The term unitary refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble GOE(n) is described by the Gaussian measure with density
$$\frac{1}{Z_{\text{GOE}(n)}}\, e^{-\frac{n}{4} \operatorname{tr} H^2}$$
on the space of $n \times n$ real symmetric matrices $H = (H_{ij})_{i,j=1}^n$. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry. Equivalently, it is generated by $H = (G + G^T)/\sqrt{2n}$, where $G$ is an $n \times n$ matrix with i.i.d. samples from the standard normal distribution.

The Gaussian symplectic ensemble GSE(n) is described by the Gaussian measure with density
$$\frac{1}{Z_{\text{GSE}(n)}}\, e^{-n \operatorname{tr} H^2}$$
on the space of $n \times n$ Hermitian quaternionic matrices, i.e., self-adjoint square matrices with quaternionic entries, $H = (H_{ij})_{i,j=1}^n$. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.
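The following sketch samples from these ensembles according to the densities above (the constructions match the stated normalizations; the function names are ours):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

def sample_goe(n, rng):
    """GOE(n): density proportional to exp(-n/4 tr H^2).
    Off-diagonal variance 1/n, diagonal variance 2/n."""
    G = rng.normal(size=(n, n))
    return (G + G.T) / np.sqrt(2 * n)

def sample_gue(n, rng):
    """GUE(n): density proportional to exp(-n/2 tr H^2).
    E|H_ij|^2 = 1/n off the diagonal, variance 1/n on it."""
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / (2 * np.sqrt(n))

H = sample_gue(500, rng)
print(np.allclose(H, H.conj().T))  # True: H is Hermitian
</syntaxhighlight>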

Point correlation functions

The ensembles as defined here have Gaussian distributed matrix elements with mean $\langle H_{ij} \rangle = 0$, and two-point correlations given by
$$\langle H_{ij} H^*_{mn} \rangle = \langle H_{ij} H_{nm} \rangle = \frac{1}{n} \delta_{im} \delta_{jn} + \frac{2 - \beta}{n \beta} \delta_{in} \delta_{jm},$$
from which all higher correlations follow by Isserlis' theorem.

Moment generating functions

The moment generating function for the GOE is
$$\mathbf{E}\bigl[e^{\operatorname{tr}(VH)}\bigr] = e^{\frac{1}{4n} \|V + V^T\|_F^2},$$
where $\|\cdot\|_F$ is the Frobenius norm.
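The identity can be checked by Monte Carlo (a small sketch; the matrix size, the scale of V, and the trial count are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, trials = 6, 100_000

def sample_goe(n, rng):
    G = rng.normal(size=(n, n))
    return (G + G.T) / np.sqrt(2 * n)  # density ~ exp(-n/4 tr H^2)

V = 0.3 * rng.normal(size=(n, n))
lhs = np.mean([np.exp(np.trace(V @ sample_goe(n, rng))) for _ in range(trials)])
rhs = np.exp(np.linalg.norm(V + V.T, "fro") ** 2 / (4 * n))
print(f"Monte Carlo: {lhs:.4f}, closed form: {rhs:.4f}")  # agree up to sampling error
</syntaxhighlight>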

Spectral density

[Figure: Spectral density of GOE/GUE/GSE for N = 2^0, 2^1, ..., 2^5, normalized so that the distributions converge to the semicircle distribution; the number of "humps" equals N.]

The joint probability density for the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ of GUE/GOE/GSE is given by
$$\frac{1}{Z_{\beta,n}} \prod_{k=1}^n e^{-\frac{\beta n}{4} \lambda_k^2} \prod_{i<j} \left| \lambda_j - \lambda_i \right|^\beta, \qquad (1)$$
where $Z_{\beta,n}$ is a normalization constant which can be explicitly computed, see Selberg integral. In the case of GUE ($\beta = 2$), the formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of $\beta$th order) for coinciding eigenvalues $\lambda_j = \lambda_i$, and $Z_{2,n} = (2\pi)^{n/2} \prod_{k=1}^n k!$.

More succinctly,
$$\frac{1}{Z_{\beta,n}}\, e^{-\frac{\beta n}{4} \|\lambda\|_2^2}\, \left| \Delta_n(\lambda) \right|^\beta,$$
where $\Delta_n$ is the Vandermonde determinant.

The distributions of the largest eigenvalue for GOE and GUE are explicitly solvable.[35] After appropriate shifting and scaling, they converge to the Tracy–Widom distribution.

Convergence to Wigner semicircular distribution

The spectrum, divided by $\sqrt{N \sigma^2}$, converges in distribution to the semicircular distribution on the interval $[-2, +2]$:
$$\rho(x) = \frac{1}{2\pi} \sqrt{4 - x^2}.$$
Here $\sigma^2$ is the variance of the off-diagonal entries; the variance of the on-diagonal entries does not matter.
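A short numerical check of this convergence (a sketch with unit off-diagonal variance):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n = 2000

# Wigner matrix with off-diagonal variance sigma^2 = 1: dividing the
# spectrum by sqrt(n * sigma^2) should give the semicircle density
# sqrt(4 - x^2) / (2 pi) on [-2, 2].
G = rng.normal(size=(n, n))
H = (G + G.T) / np.sqrt(2)               # off-diagonal variance 1
evals = np.linalg.eigvalsh(H) / np.sqrt(n)

hist, edges = np.histogram(evals, bins=50, range=(-2, 2), density=True)
mid = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(4 - mid**2) / (2 * np.pi)
print(f"max deviation from semicircle: {np.abs(hist - semicircle).max():.3f}")
</syntaxhighlight>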

Distribution of level spacings

From the ordered sequence of eigenvalues $\lambda_1 < \cdots < \lambda_n < \lambda_{n+1} < \cdots$, one defines the normalized spacings $s = (\lambda_{n+1} - \lambda_n)/\langle s \rangle$, where $\langle s \rangle = \langle \lambda_{n+1} - \lambda_n \rangle$ is the mean spacing. The probability distribution of spacings is approximately given by
$$p_1(s) = \frac{\pi}{2}\, s\, e^{-\frac{\pi}{4} s^2}$$
for the orthogonal ensemble GOE ($\beta = 1$),
$$p_2(s) = \frac{32}{\pi^2}\, s^2\, e^{-\frac{4}{\pi} s^2}$$
for the unitary ensemble GUE ($\beta = 2$), and
$$p_4(s) = \frac{2^{18}}{3^6 \pi^3}\, s^4\, e^{-\frac{64}{9\pi} s^2}$$
for the symplectic ensemble GSE ($\beta = 4$).

The numerical constants are such that $p_\beta(s)$ is normalized,
$$\int_0^\infty ds\, p_\beta(s) = 1,$$
and the mean spacing is one,
$$\int_0^\infty ds\, s\, p_\beta(s) = 1,$$
for $\beta = 1, 2, 4$.
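These surmises can be compared against sampled GOE spectra. The sketch below normalizes bulk spacings by their overall mean rather than performing a careful local unfolding, so the agreement is only approximate:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n, trials = 400, 100

spacings = []
for _ in range(trials):
    G = rng.normal(size=(n, n))
    evals = np.linalg.eigvalsh((G + G.T) / np.sqrt(2 * n))
    bulk = np.diff(evals)[n // 4 : 3 * n // 4]  # stay away from the edges
    spacings.extend(bulk / bulk.mean())

s = np.asarray(spacings)
# For the GOE surmise p_1(s) = (pi/2) s exp(-pi s^2/4), E[s^2] = 4/pi.
print(f"E[s^2]: empirical = {(s**2).mean():.3f}, surmise = {4 / np.pi:.3f}")
</syntaxhighlight>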

Generalizations

Wigner matrices are random Hermitian matrices $H_n = (H_n(i,j))_{i,j=1}^n$ such that the entries $\{ H_n(i,j),\ 1 \le i \le j \le n \}$ above the main diagonal are independent random variables with zero mean and identical second moments.

The Gaussian ensembles can be extended to $\beta \notin \{1, 2, 4\}$ using the Dumitriu–Edelman tridiagonal ensemble.[36]
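A sketch of the tridiagonal construction (scaling conventions vary; the one below aims for an eigenvalue density proportional to $|\Delta_n(\lambda)|^\beta e^{-\sum_i \lambda_i^2/2}$, and the normalization should be treated as an assumption):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)

def beta_hermite(n, beta, rng):
    """Dumitriu-Edelman tridiagonal model for a general Dyson index beta:
    Gaussian diagonal, chi-distributed off-diagonal with decreasing
    degrees of freedom beta*(n-1), beta*(n-2), ..., beta."""
    diag = rng.normal(0.0, np.sqrt(2.0), size=n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return H / np.sqrt(2.0)

# Works for any beta > 0, not only the classical values 1, 2, 4.
evals = np.linalg.eigvalsh(beta_hermite(1000, beta=2.5, rng=rng))
print(evals.min(), evals.max())  # a semicircle-type bulk on a sqrt(n) scale
</syntaxhighlight>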

Invariant matrix ensembles are random Hermitian matrices with density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices of the form
$$\frac{1}{Z_n}\, e^{-n \operatorname{tr} V(H)},$$
where the function $V$ is called the potential.

The Gaussian ensembles are the only common special cases of these two classes of random matrices. This is a consequence of a theorem by Porter and Rosenzweig.[37][38]

Spectral theory of random matrices

The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.[39]

Empirical spectral measure

The empirical spectral measure $\mu_H$ of $H$ is defined by
$$\mu_H(A) = \frac{1}{n}\, \# \{ \text{eigenvalues of } H \text{ in } A \} = N_{\mathbf{1}_A, H}, \qquad A \subset \mathbb{R}.$$

Usually, the limit of μH is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted N(λ). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ρ(λ).

Alternative expressions

$$\mu_H(A) = \frac{1}{n} \sum_i \delta_{\lambda_i}(A)$$

Types of convergence

Given a matrix ensemble, we say that its spectral measures converge weakly to $\rho$ iff for any measurable set $A$ the ensemble average converges:
$$\lim_{n \to \infty} \mathbb{E}_H[\mu_H(A)] = \rho(A).$$
Convergence weakly almost surely: if $H_1, H_2, H_3, \dots$ are sampled independently from the ensemble, then with probability 1,
$$\lim_{n \to \infty} \mu_{H_n}(A) = \rho(A)$$
for any measurable set $A$.

In another sense, weak almost sure convergence means that we sample $H_1, H_2, H_3, \dots$ not independently but by "growing" them as a stochastic process; then with probability 1, $\lim_{n \to \infty} \mu_{H_n}(A) = \rho(A)$ for any measurable set $A$.

For example, we can "grow" a sequence of matrices from the Gaussian ensemble as follows:

  • Sample a doubly infinite array of standard Gaussian random variables $\{G_{i,j}\}_{i,j=1,2,3,\dots}$.
  • Define each $H_n = (G_n + G_n^T)/\sqrt{2n}$, where $G_n$ is the $n \times n$ matrix with entries $\{G_{i,j}\}_{i,j=1,2,\dots,n}$.

Note that generic matrix ensembles do not allow such a growing construction, but most of the common ones, such as the three Gaussian ensembles, do.
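A minimal sketch of this growing construction:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(8)

# One underlying Gaussian array is reused for every n: each H_n is built
# from the top-left n x n block, so the matrices are "grown", not
# resampled. (A finite block stands in for the doubly infinite array.)
N_max = 512
G = rng.normal(size=(N_max, N_max))

for n in (64, 128, 256, 512):
    Gn = G[:n, :n]
    Hn = (Gn + Gn.T) / np.sqrt(2 * n)
    mu = np.linalg.eigvalsh(Hn)
    frac = (np.abs(mu) <= 1).mean()
    print(f"n = {n}: mu_H([-1, 1]) = {frac:.3f}")  # semicircle value ~ 0.609
</syntaxhighlight>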

Global regime

In the global regime, one is interested in the distribution of linear statistics of the form
$$N_{f,H} = n^{-1} \operatorname{tr} f(H).$$

The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle distribution and Wigner surmise. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur.[40][41]

The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.[42]

Fluctuations

For the linear statistics $N_{f,H}$, one is also interested in the fluctuations about $\int f(\lambda)\, dN(\lambda)$. For many classes of random matrices, a central limit theorem of the form
$$\frac{N_{f,H} - \int f(\lambda)\, dN(\lambda)}{\sigma_{f,n}} \xrightarrow{D} N(0,1)$$
is known.[43][44]

The variational problem for the unitary ensembles

Consider the measure

$$d\mu_N(\lambda) = \frac{1}{\widetilde{Z}_N}\, e^{-H_N(\lambda)}\, d\lambda, \qquad H_N(\lambda) = -\sum_{j \neq k} \ln |\lambda_j - \lambda_k| + N \sum_{j=1}^N Q(\lambda_j),$$

where $Q$ is the potential of the ensemble, and let $\nu$ be the empirical spectral measure.

We can rewrite $H_N(\lambda)$ in terms of $\nu$ as

$$H_N(\lambda) = N^2 \left[ - \iint_{x \neq y} \ln |x-y|\, d\nu(x)\, d\nu(y) + \int Q(x)\, d\nu(x) \right],$$

the probability measure is now of the form

$$d\mu_N(\lambda) = \frac{1}{\widetilde{Z}_N}\, e^{-N^2 I_Q(\nu)}\, d\lambda,$$

where $I_Q(\nu)$ is the functional in square brackets above.

Let now

$$M_1(\mathbb{R}) = \left\{ \nu : \nu \geq 0,\ \int_{\mathbb{R}} d\nu = 1 \right\}$$

be the space of one-dimensional probability measures, and consider the minimization problem

$$E_Q = \inf_{\nu \in M_1(\mathbb{R})} \left\{ - \iint_{x \neq y} \ln |x-y|\, d\nu(x)\, d\nu(y) + \int Q(x)\, d\nu(x) \right\}.$$

The infimum $E_Q$ is attained by a unique equilibrium measure $\nu_Q$, characterized by the Euler–Lagrange variational conditions for some real constant $l$:

$$2 \int \log |x-y|\, d\nu(y) - Q(x) = l, \qquad x \in J,$$
$$2 \int \log |x-y|\, d\nu(y) - Q(x) \leq l, \qquad x \notin J,$$

where $J = \bigcup_{j=1}^q [a_j, b_j]$ is the support of the measure. Define

$$q(x) = \left( \frac{Q'(x)}{2} \right)^2 - \int \frac{Q'(x) - Q'(y)}{x-y}\, d\nu_Q(y).$$

The equilibrium measure $\nu_Q$ then has the Radon–Nikodym density

$$\frac{d\nu_Q(x)}{dx} = \frac{1}{\pi} \sqrt{-q(x)}.$$[45]
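As a consistency check, consider the Gaussian potential $Q(x) = x^2/2$ (a standard worked example): then $Q'(x) = x$, so
$$q(x) = \left( \frac{x}{2} \right)^2 - \int \frac{x - y}{x - y}\, d\nu_Q(y) = \frac{x^2}{4} - 1,$$
and
$$\frac{d\nu_Q(x)}{dx} = \frac{1}{\pi} \sqrt{-q(x)} = \frac{1}{2\pi} \sqrt{4 - x^2}, \qquad J = [-2, 2],$$
which is Wigner's semicircle law, consistent with the convergence result stated above.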

Mesoscopic regime

The typical statement of the Wigner semicircular law is equivalent to the following: for each fixed interval $[\lambda_0 - \Delta\lambda, \lambda_0 + \Delta\lambda]$ centered at a point $\lambda_0$, as $N$, the dimension of the Gaussian ensemble, increases, the proportion of the eigenvalues falling within the interval converges to $\int_{\lambda_0 - \Delta\lambda}^{\lambda_0 + \Delta\lambda} \rho(t)\, dt$, where $\rho(t)$ is the density of the semicircular distribution.[46][47]

If $\Delta\lambda$ is allowed to decrease as $N$ increases, then we obtain strictly stronger theorems, named "local laws" or the "mesoscopic regime".

The mesoscopic regime is intermediate between the local and the global regime: one is interested in the limit distribution of eigenvalues in a set that shrinks to zero, but slowly enough that the number of eigenvalues inside it still diverges.

For example, the Ginibre ensemble has a mesoscopic law: for any sequence of shrinking disks inside the unit disk, if the disks have area $A_n = O(n^{-1+\epsilon})$, the conditional distribution of the spectrum inside the disks also converges to a uniform distribution. That is, if we cut out the shrinking disks along with the spectrum falling inside them, then scale the disks up to unit area, we would see the spectra converging to a flat distribution in the disks.[47]

Local regime

In the local regime, one is interested in the limit distribution of eigenvalues in a set that shrinks so fast that the number of eigenvalues remains O(1).

Typically this means the study of spacings between eigenvalues, and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/n. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.

Bulk statistics

Formally, fix $\lambda_0$ in the interior of the support of $N(\lambda)$. Then consider the point process
$$\Xi(\lambda_0) = \sum_j \delta_{\, n \rho(\lambda_0) (\lambda_j - \lambda_0)},$$
where $\lambda_j$ are the eigenvalues of the random matrix.

The point process $\Xi(\lambda_0)$ captures the statistical properties of eigenvalues in the vicinity of $\lambda_0$. For the Gaussian ensembles, the limit of $\Xi(\lambda_0)$ is known;[4] for example, for GUE it is a determinantal point process with the kernel
$$K(x,y) = \frac{\sin \pi(x-y)}{\pi(x-y)}$$
(the sine kernel).

The universality principle postulates that the limit of Ξ(λ0) as n should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on λ0). Rigorous proofs of universality are known for invariant matrix ensembles[48][49] and Wigner matrices.[50][51]

Edge statistics

Template:Main

One example of edge statistics is the Tracy–Widom distribution.

As another example, consider the Ginibre ensemble. It can be real or complex. The real Ginibre ensemble has i.i.d. standard Gaussian entries 𝒩(0,1), and the complex Ginibre ensemble has i.i.d. standard complex Gaussian entries 𝒩(0,1/2)+i𝒩(0,1/2).

Now let $G_n$ be sampled from the real or complex ensemble, and let $\rho(G_n)$ be the absolute value of its maximal eigenvalue:
$$\rho(G_n) := \max_j |\lambda_j|.$$
We have the following theorem for the edge statistics:[52]

Template:Math theorem

This theorem refines the circular law of the Ginibre ensemble. In words, the circular law says that the spectrum of $\frac{1}{\sqrt{n}} G_n$ almost surely falls uniformly on the unit disc, while the edge statistics theorem states that the radius of the almost-unit-disk is about $1 + \sqrt{\frac{\gamma_n}{4n}}$ and fluctuates on a scale of $\frac{1}{\sqrt{4 n \gamma_n}}$, according to the Gumbel law.
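A numerical sketch of the refinement (here $\gamma_n = \log(n/2\pi) - 2 \log\log n$, the normalization used in the edge-statistics literature; since the theorem statement above is abbreviated, this choice of $\gamma_n$ is our assumption):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(9)
n, trials = 1000, 20

# Spectral radius of the scaled complex Ginibre matrix G_n / sqrt(n):
# the bulk fills the unit disk, but the largest |eigenvalue| sits
# slightly outside radius 1, with Gumbel-scale fluctuations.
radii = []
for _ in range(trials):
    G = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    radii.append(np.abs(np.linalg.eigvals(G / np.sqrt(n))).max())

gamma = np.log(n / (2 * np.pi)) - 2 * np.log(np.log(n))
print(f"mean radius = {np.mean(radii):.4f}, "
      f"1 + sqrt(gamma/(4n)) = {1 + np.sqrt(gamma / (4 * n)):.4f}")
</syntaxhighlight>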

Correlation functions

The joint probability density of the eigenvalues of $n \times n$ random Hermitian matrices $M \in \mathbf{H}^{n \times n}$, with partition functions of the form
$$Z_n = \int_{\mathbf{H}^{n \times n}} d\mu_0(M)\, e^{-\operatorname{tr}(V(M))},$$
where $V(x) := \sum_{j=1}^\infty v_j x^j$ and $d\mu_0(M)$ is the standard Lebesgue measure on the space $\mathbf{H}^{n \times n}$ of $n \times n$ Hermitian matrices, is given by
$$p_{n,V}(x_1, \dots, x_n) = \frac{1}{Z_{n,V}} \prod_{i<j} (x_i - x_j)^2\, e^{-\sum_i V(x_i)}.$$
The $k$-point correlation functions (or marginal distributions) are defined as
$$R^{(k)}_{n,V}(x_1, \dots, x_k) = \frac{n!}{(n-k)!} \int_{\mathbf{R}} dx_{k+1} \cdots \int_{\mathbf{R}} dx_n\, p_{n,V}(x_1, x_2, \dots, x_n),$$
which are symmetric functions of their variables. In particular, the one-point correlation function, or density of states, is
$$R^{(1)}_{n,V}(x_1) = n \int_{\mathbf{R}} dx_2 \cdots \int_{\mathbf{R}} dx_n\, p_{n,V}(x_1, x_2, \dots, x_n).$$
Its integral over a Borel set $B \subset \mathbf{R}$ gives the expected number of eigenvalues contained in $B$:
$$\int_B R^{(1)}_{n,V}(x)\, dx = \mathbf{E}\bigl( \# \{ \text{eigenvalues in } B \} \bigr).$$

The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs (xi,xj) of points appearing within the correlator.

Theorem [Dyson–Mehta]: For any $k$, $1 \le k \le n$, the $k$-point correlation function $R^{(k)}_{n,V}$ can be written as a determinant
$$R^{(k)}_{n,V}(x_1, x_2, \dots, x_k) = \det_{1 \le i,j \le k} \bigl( K_{n,V}(x_i, x_j) \bigr),$$
where $K_{n,V}(x,y)$ is the $n$th Christoffel–Darboux kernel
$$K_{n,V}(x,y) := \sum_{k=0}^{n-1} \psi_k(x) \psi_k(y)$$
associated to $V$, written in terms of the quasipolynomials
$$\psi_k(x) = \frac{1}{\sqrt{h_k}}\, p_k(x)\, e^{-V(x)/2},$$
where $\{p_k(x)\}_{k \in \mathbf{N}}$ is a complete sequence of monic polynomials, of the degrees indicated, satisfying the orthogonality conditions
$$\int_{\mathbf{R}} \psi_j(x) \psi_k(x)\, dx = \delta_{jk}.$$
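For the Gaussian case $V(x) = x^2/2$, the monic orthogonal polynomials are the probabilists' Hermite polynomials $\mathrm{He}_k$, with squared norms $h_k = k! \sqrt{2\pi}$ against the weight $e^{-x^2/2}$, so the kernel and the density of states can be evaluated directly. A sketch:

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Density of states R1(x) = K_n(x, x) = sum_{k<n} psi_k(x)^2 for
# V(x) = x^2/2, with psi_k(x) = He_k(x) exp(-x^2/4) / sqrt(k! sqrt(2 pi)).
n = 20
x = np.linspace(-2 * np.sqrt(n), 2 * np.sqrt(n), 2000)

density = np.zeros_like(x)
for k in range(n):
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0  # the monic polynomial He_k
    psi_k = He.hermeval(x, coeffs) * np.exp(-x**2 / 4) / sqrt(factorial(k) * sqrt(2 * pi))
    density += psi_k**2

# Integrating the density of states recovers the number of eigenvalues n.
print(f"integral of R1 ~ {density.sum() * (x[1] - x[0]):.2f} (expect {n})")
</syntaxhighlight>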

Other classes of random matrices

Wishart matrices

Template:Main

Wishart matrices are $n \times n$ random matrices of the form $H = X X^*$, where $X$ is an $n \times m$ random matrix ($m \ge n$) with independent entries, and $X^*$ is its conjugate transpose. In the important special case considered by Wishart, the entries of $X$ are identically distributed Gaussian random variables (either real or complex).

The limit of the empirical spectral measure of Wishart matrices was found[40] by Vladimir Marchenko and Leonid Pastur.

Random unitary matrices

Template:Main

Non-Hermitian random matrices

Template:Main

Selected bibliography

Books

Survey articles

Historic works

References

Template:Reflist


  1. 1.0 1.1 Template:Cite journal
  2. 2.0 2.1 Template:Cite report
  3. 3.0 3.1 Template:Cite journal
  4. 4.0 4.1 Template:Harvnb
  5. Template:Cite journal
  6. Template:Cite journal
  7. Template:Cite journal
  8. Template:Cite journal
  9. Template:Cite journal
  10. Template:Cite journal
  11. Template:Cite journal
  12. Template:Cite journal
  13. Template:Cite journal
  14. Template:Cite journal
  15. Template:Harvnb
  16. Template:Cite journal
  17. Template:Cite journal
  18. Template:Cite arXiv
  19. Template:Harvnb
  20. Template:Harvnb
  21. Template:Cite journal
  22. Mingo, James A.; Speicher, Roland (2017). Free Probability and Random Matrices. Fields Institute Monographs, vol. 35. New York: Springer.
  23. Voiculescu, Dan (1991). "Limit laws for random matrices and free products". Inventiones Mathematicae. 104 (1): 201–220.
  24. Template:Cite journal
  25. Template:Cite journal
  26. Template:Cite journal
  27. Template:Cite journal
  28. Template:Cite journal
  29. Template:Cite journal
  30. Template:Cite journal
  31. Template:Cite book
  32. Template:Cite journal
  33. Template:Cite journal
  34. Template:Cite web
  35. Template:Cite journal
  36. Template:Citation
  37. Template:Cite journal
  38. Template:Citation
  39. Template:Cite arXiv
  40. 40.0 40.1 Template:Cite journal
  41. Template:Harvnb
  42. Template:Cite journal
  43. Template:Cite journal
  44. Template:Cite journal
  45. Template:Cite book
  46. Template:Cite journal
  47. 47.0 47.1 Template:Cite journal
  48. Template:Cite journal
  49. Template:Cite journal
  50. Template:Cite journal
  51. Template:Cite journal
  52. Template:Cite journal