Covariance matrix



A bivariate Gaussian probability density function centered at (0, 0), with covariance matrix given by $\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}$.

Sample points from a bivariate Gaussian distribution with a standard deviation of 3 in roughly the lower left–upper right direction and of 1 in the orthogonal direction. Because the x and y components co-vary, the variances of x and y do not fully describe the distribution. A 2×2 covariance matrix is needed; the directions of the arrows correspond to the eigenvectors of this covariance matrix and their lengths to the square roots of the eigenvalues.


In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.

Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2×2 matrix would be necessary to fully characterize the two-dimensional variation.

Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).

The covariance matrix of a random vector $\mathbf{X}$ is typically denoted by $K_{\mathbf{XX}}$, $\Sigma$ or $S$.

Definition

Throughout this article, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and Roman subscripted $X_i$ and $Y_i$ are used to refer to scalar random variables.

If the entries in the column vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^{\mathsf{T}}$ are random variables, each with finite variance and expected value, then the covariance matrix $K_{\mathbf{XX}}$ is the matrix whose $(i,j)$ entry is the covariance[1] $$K_{X_i X_j} = \operatorname{cov}[X_i, X_j] = \operatorname{E}\left[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])\right],$$ where the operator $\operatorname{E}$ denotes the expected value (mean) of its argument.
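The entrywise definition translates directly into code. The following is a minimal NumPy sketch (the distribution parameters and variable names are illustrative, not from the source): it estimates each entry $K_{X_i X_j}$ from samples and checks the result against numpy.cov.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# n = 3 variables, 10_000 observations (rows) -- made-up sample data
samples = rng.multivariate_normal(mean=[0.0, 1.0, -2.0],
                                  cov=[[2.0, 0.3, 0.0],
                                       [0.3, 1.0, 0.5],
                                       [0.0, 0.5, 1.5]],
                                  size=10_000)

# Entrywise definition: K[i, j] = E[(X_i - E[X_i]) (X_j - E[X_j])]
centered = samples - samples.mean(axis=0)
n_vars = samples.shape[1]
K = np.empty((n_vars, n_vars))
for i in range(n_vars):
    for j in range(n_vars):
        K[i, j] = np.mean(centered[:, i] * centered[:, j])

# np.cov expects variables in rows; bias=True uses the same 1/N normalization
assert np.allclose(K, np.cov(samples.T, bias=True))
</syntaxhighlight>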

Conflicting nomenclatures and notations

Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications,[2] call the matrix $K_{\mathbf{XX}}$ the variance of the random vector $\mathbf{X}$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $\mathbf{X}$: $$\operatorname{var}(\mathbf{X}) = \operatorname{cov}(\mathbf{X},\mathbf{X}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right].$$

Both forms are quite standard, and there is no ambiguity between them. The matrix $K_{\mathbf{XX}}$ is also often called the variance–covariance matrix, since the diagonal terms are in fact variances.

By comparison, the notation for the cross-covariance matrix between two vectors is $$\operatorname{cov}(\mathbf{X},\mathbf{Y}) = K_{\mathbf{XY}} = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^{\mathsf{T}}\right].$$

Properties

Relation to the autocorrelation matrix

The auto-covariance matrix $K_{\mathbf{XX}}$ is related to the autocorrelation matrix $R_{\mathbf{XX}}$ by $$K_{\mathbf{XX}} = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right] = R_{\mathbf{XX}} - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathsf{T}},$$ where the autocorrelation matrix is defined as $R_{\mathbf{XX}} = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathsf{T}}]$.
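This identity holds exactly for the corresponding sample estimates as well, which the following short NumPy sketch (synthetic data, illustrative names) verifies:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 100_000))          # 5 variables, many observations

R = (X @ X.T) / X.shape[1]                 # autocorrelation matrix E[X X^T]
mu = X.mean(axis=1, keepdims=True)         # E[X]
K = np.cov(X, bias=True)                   # covariance matrix, 1/N normalization

# K_XX = R_XX - E[X] E[X]^T
assert np.allclose(K, R - mu @ mu.T)
</syntaxhighlight>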

Relation to the correlation matrix

An entity closely related to the covariance matrix is the matrix of Pearson product-moment correlation coefficients between each of the random variables in the random vector $\mathbf{X}$, which can be written as $$\operatorname{corr}(\mathbf{X}) = \left(\operatorname{diag}(K_{\mathbf{XX}})\right)^{-\frac{1}{2}} \, K_{\mathbf{XX}} \, \left(\operatorname{diag}(K_{\mathbf{XX}})\right)^{-\frac{1}{2}},$$ where $\operatorname{diag}(K_{\mathbf{XX}})$ is the matrix of the diagonal elements of $K_{\mathbf{XX}}$ (i.e., a diagonal matrix of the variances of $X_i$ for $i = 1, \ldots, n$).

Equivalently, the correlation matrix can be seen as the covariance matrix of the standardized random variables $X_i/\sigma(X_i)$ for $i = 1, \ldots, n$: $$\operatorname{corr}(\mathbf{X}) = \begin{bmatrix} 1 & \frac{\operatorname{E}[(X_1-\mu_1)(X_2-\mu_2)]}{\sigma(X_1)\sigma(X_2)} & \cdots & \frac{\operatorname{E}[(X_1-\mu_1)(X_n-\mu_n)]}{\sigma(X_1)\sigma(X_n)} \\ \frac{\operatorname{E}[(X_2-\mu_2)(X_1-\mu_1)]}{\sigma(X_2)\sigma(X_1)} & 1 & \cdots & \frac{\operatorname{E}[(X_2-\mu_2)(X_n-\mu_n)]}{\sigma(X_2)\sigma(X_n)} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\operatorname{E}[(X_n-\mu_n)(X_1-\mu_1)]}{\sigma(X_n)\sigma(X_1)} & \frac{\operatorname{E}[(X_n-\mu_n)(X_2-\mu_2)]}{\sigma(X_n)\sigma(X_2)} & \cdots & 1 \end{bmatrix}.$$

Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between −1 and +1 inclusive.
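The rescaling formula above is a one-liner in practice. A minimal sketch (the covariance matrix below is a made-up example):

<syntaxhighlight lang="python">
import numpy as np

K = np.array([[ 4.0, 1.2, -0.8],
              [ 1.2, 1.0,  0.3],
              [-0.8, 0.3,  2.25]])         # an example covariance matrix

d = np.sqrt(np.diag(K))                    # standard deviations sigma(X_i)
corr = K / np.outer(d, d)                  # (diag K)^(-1/2) K (diag K)^(-1/2)

assert np.allclose(np.diag(corr), 1.0)     # unit diagonal
assert np.all(np.abs(corr) <= 1.0 + 1e-12) # off-diagonal entries in [-1, +1]
</syntaxhighlight>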

Inverse of the covariance matrix

The inverse of this matrix, $K_{\mathbf{XX}}^{-1}$, if it exists, is the inverse covariance matrix, also known as the precision matrix (or concentration matrix).[3]

Just as the covariance matrix can be written as the rescaling of a correlation matrix by the marginal variances: $$\operatorname{cov}(\mathbf{X}) = \begin{bmatrix} \sigma_{x_1} & & & 0 \\ & \sigma_{x_2} & & \\ & & \ddots & \\ 0 & & & \sigma_{x_n} \end{bmatrix} \begin{bmatrix} 1 & \rho_{x_1,x_2} & \cdots & \rho_{x_1,x_n} \\ \rho_{x_2,x_1} & 1 & \cdots & \rho_{x_2,x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{x_n,x_1} & \rho_{x_n,x_2} & \cdots & 1 \end{bmatrix} \begin{bmatrix} \sigma_{x_1} & & & 0 \\ & \sigma_{x_2} & & \\ & & \ddots & \\ 0 & & & \sigma_{x_n} \end{bmatrix},$$

so, using the ideas of partial correlation and partial variance, the inverse covariance matrix can be expressed analogously: $$\operatorname{cov}(\mathbf{X})^{-1} = \begin{bmatrix} \frac{1}{\sigma_{x_1|x_2\ldots}} & & & 0 \\ & \frac{1}{\sigma_{x_2|x_1,x_3\ldots}} & & \\ & & \ddots & \\ 0 & & & \frac{1}{\sigma_{x_n|x_1\ldots x_{n-1}}} \end{bmatrix} \begin{bmatrix} 1 & -\rho_{x_1,x_2\mid x_3\ldots} & \cdots & -\rho_{x_1,x_n\mid x_2\ldots x_{n-1}} \\ -\rho_{x_2,x_1\mid x_3\ldots} & 1 & \cdots & -\rho_{x_2,x_n\mid x_1,x_3\ldots x_{n-1}} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{x_n,x_1\mid x_2\ldots x_{n-1}} & -\rho_{x_n,x_2\mid x_1,x_3\ldots x_{n-1}} & \cdots & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{\sigma_{x_1|x_2\ldots}} & & & 0 \\ & \frac{1}{\sigma_{x_2|x_1,x_3\ldots}} & & \\ & & \ddots & \\ 0 & & & \frac{1}{\sigma_{x_n|x_1\ldots x_{n-1}}} \end{bmatrix}.$$ This duality motivates a number of other dualities between marginalizing and conditioning for Gaussian random variables.
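The factorization can be checked numerically. A minimal sketch (the covariance matrix is a made-up example) computes the precision matrix and reads off the partial correlations from its normalized off-diagonal entries, $\rho_{x_i,x_j\mid\text{rest}} = -P_{ij}/\sqrt{P_{ii}P_{jj}}$, the standard identity implied by the factorization above:

<syntaxhighlight lang="python">
import numpy as np

K = np.array([[2.0, 0.8, 0.3],
              [0.8, 1.5, 0.6],
              [0.3, 0.6, 1.0]])            # example covariance matrix

P = np.linalg.inv(K)                       # precision (concentration) matrix

# Partial correlation of X_i and X_j given all other variables
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(partial_corr)
</syntaxhighlight>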

Basic properties

For $K_{\mathbf{XX}} = \operatorname{var}(\mathbf{X}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right]$ and $\boldsymbol{\mu}_{\mathbf{X}} = \operatorname{E}[\mathbf{X}]$, where $\mathbf{X} = (X_1, \ldots, X_n)^{\mathsf{T}}$ is an $n$-dimensional random variable, the following basic properties apply (a numerical check follows the list):[4]

  1. $K_{\mathbf{XX}} = \operatorname{E}(\mathbf{X}\mathbf{X}^{\mathsf{T}}) - \boldsymbol{\mu}_{\mathbf{X}}\boldsymbol{\mu}_{\mathbf{X}}^{\mathsf{T}}$
  2. $K_{\mathbf{XX}}$ is positive-semidefinite, i.e. $\mathbf{a}^{\mathsf{T}} K_{\mathbf{XX}} \mathbf{a} \geq 0$ for all $\mathbf{a} \in \mathbb{R}^n$
  3. $K_{\mathbf{XX}}$ is symmetric, i.e. $K_{\mathbf{XX}}^{\mathsf{T}} = K_{\mathbf{XX}}$
  4. For any constant (i.e. non-random) $m \times n$ matrix $\mathbf{A}$ and constant $m \times 1$ vector $\mathbf{a}$, one has $\operatorname{var}(\mathbf{A}\mathbf{X} + \mathbf{a}) = \mathbf{A}\,\operatorname{var}(\mathbf{X})\,\mathbf{A}^{\mathsf{T}}$
  5. If $\mathbf{Y}$ is another random vector with the same dimension as $\mathbf{X}$, then $\operatorname{var}(\mathbf{X} + \mathbf{Y}) = \operatorname{var}(\mathbf{X}) + \operatorname{cov}(\mathbf{X},\mathbf{Y}) + \operatorname{cov}(\mathbf{Y},\mathbf{X}) + \operatorname{var}(\mathbf{Y})$, where $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ is the cross-covariance matrix of $\mathbf{X}$ and $\mathbf{Y}$.

Block matrices

The joint mean $\boldsymbol{\mu}$ and joint covariance matrix $\boldsymbol{\Sigma}$ of $\mathbf{X}$ and $\mathbf{Y}$ can be written in block form $$\boldsymbol{\mu} = \begin{bmatrix} \boldsymbol{\mu}_{X} \\ \boldsymbol{\mu}_{Y} \end{bmatrix}, \qquad \boldsymbol{\Sigma} = \begin{bmatrix} K_{\mathbf{XX}} & K_{\mathbf{XY}} \\ K_{\mathbf{YX}} & K_{\mathbf{YY}} \end{bmatrix},$$ where $K_{\mathbf{XX}} = \operatorname{var}(\mathbf{X})$, $K_{\mathbf{YY}} = \operatorname{var}(\mathbf{Y})$ and $K_{\mathbf{XY}} = K_{\mathbf{YX}}^{\mathsf{T}} = \operatorname{cov}(\mathbf{X},\mathbf{Y})$.

K๐—๐— and K๐˜๐˜ can be identified as the variance matrices of the marginal distributions for ๐— and ๐˜ respectively.

If ๐— and ๐˜ are jointly normally distributed, ๐—,๐˜โˆผ ๐’ฉ(๐,๐œฎ), then the conditional distribution for ๐˜ given ๐— is given by[5] ๐˜โˆฃ๐—โˆผ ๐’ฉ(๐๐˜|๐—,K๐˜|๐—), defined by conditional mean ๐๐˜|๐—=๐๐˜+K๐˜๐—K๐—๐—โˆ’1(๐—โˆ’๐๐—) and conditional variance K๐˜|๐—=K๐˜๐˜โˆ’K๐˜๐—K๐—๐—โˆ’1K๐—๐˜.

The matrix $K_{\mathbf{YX}} K_{\mathbf{XX}}^{-1}$ is known as the matrix of regression coefficients, while in linear algebra $K_{\mathbf{Y}|\mathbf{X}}$ is the Schur complement of $K_{\mathbf{XX}}$ in $\boldsymbol{\Sigma}$.

The matrix of regression coefficients may often be given in transpose form, $K_{\mathbf{XX}}^{-1} K_{\mathbf{XY}}$, suitable for post-multiplying a row vector of explanatory variables $\mathbf{X}^{\mathsf{T}}$ rather than pre-multiplying a column vector $\mathbf{X}$. In this form they correspond to the coefficients obtained by inverting the matrix of the normal equations of ordinary least squares (OLS).
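A minimal sketch of Gaussian conditioning via the Schur complement (the block matrices and observed value below are made-up examples):

<syntaxhighlight lang="python">
import numpy as np

# Joint covariance of (X, Y) in block form -- illustrative numbers
Kxx = np.array([[2.0, 0.4], [0.4, 1.0]])
Kxy = np.array([[0.5, 0.2], [0.1, 0.3]])
Kyy = np.array([[1.5, 0.2], [0.2, 0.8]])
mu_x = np.array([0.0, 1.0]); mu_y = np.array([-1.0, 2.0])

x_obs = np.array([0.5, 0.7])                 # observed value of X

B = Kxy.T @ np.linalg.inv(Kxx)               # regression coefficients K_YX K_XX^{-1}
mu_cond = mu_y + B @ (x_obs - mu_x)          # conditional mean of Y | X
K_cond = Kyy - B @ Kxy                       # Schur complement of K_XX in Sigma

print("E[Y | X = x_obs]   =", mu_cond)
print("var(Y | X = x_obs) =\n", K_cond)
</syntaxhighlight>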

Partial covariance matrix

A covariance matrix with all non-zero elements tells us that all the individual random variables are interrelated. This means that the variables are not only directly correlated, but also correlated via other variables indirectly. Often such indirect, common-mode correlations are trivial and uninteresting. They can be suppressed by calculating the partial covariance matrix, that is, the part of the covariance matrix that shows only the interesting part of the correlations.

If two vectors of random variables $\mathbf{X}$ and $\mathbf{Y}$ are correlated via another vector $\mathbf{I}$, the latter correlations are suppressed in a matrix[6] $$K_{\mathbf{XY}\mid\mathbf{I}} = \operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I}) = \operatorname{cov}(\mathbf{X},\mathbf{Y}) - \operatorname{cov}(\mathbf{X},\mathbf{I}) \operatorname{cov}(\mathbf{I},\mathbf{I})^{-1} \operatorname{cov}(\mathbf{I},\mathbf{Y}).$$ The partial covariance matrix $K_{\mathbf{XY}\mid\mathbf{I}}$ is effectively the simple covariance matrix $K_{\mathbf{XY}}$ as if the uninteresting random variables $\mathbf{I}$ were held constant.
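The suppression effect is easy to demonstrate on synthetic data. In the sketch below (all coefficients and names are illustrative), X and Y are both driven by a common-mode variable in I, so their plain covariance is large while the partial covariance is near zero:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
I = rng.normal(size=(2, n))                          # common-mode variables
X = 0.9 * I[:1] + 0.1 * rng.normal(size=(1, n))      # X driven mostly by I
Y = 0.8 * I[:1] + 0.2 * rng.normal(size=(1, n))      # Y driven by the same I

def cross_cov(A, B):
    """Sample cross-covariance cov(A, B); variables in rows."""
    Ac = A - A.mean(axis=1, keepdims=True)
    Bc = B - B.mean(axis=1, keepdims=True)
    return Ac @ Bc.T / (A.shape[1] - 1)

# pcov(X, Y | I) = cov(X, Y) - cov(X, I) cov(I, I)^{-1} cov(I, Y)
pcov = cross_cov(X, Y) - cross_cov(X, I) @ np.linalg.inv(cross_cov(I, I)) @ cross_cov(I, Y)

print("cov(X, Y)      =", cross_cov(X, Y))           # strong apparent correlation
print("pcov(X, Y | I) =", pcov)                      # near zero once I is held fixed
</syntaxhighlight>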

Standard deviation matrix


The standard deviation matrix $\mathbf{S}$ is the extension of the standard deviation to multiple dimensions. It is the symmetric square root of the covariance matrix $\boldsymbol{\Sigma}$.[7]

Covariance matrix as a parameter of a distribution

If a column vector $\mathbf{X}$ of $n$ possibly correlated random variables is jointly normally distributed, or more generally elliptically distributed, then its probability density function $f(\mathbf{X})$ can be expressed in terms of the covariance matrix $\boldsymbol{\Sigma}$ as follows:[6] $$f(\mathbf{X}) = (2\pi)^{-n/2} |\boldsymbol{\Sigma}|^{-1/2} \exp\left(-\tfrac{1}{2} (\mathbf{X} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{X} - \boldsymbol{\mu})\right),$$ where $\boldsymbol{\mu} = \operatorname{E}[\mathbf{X}]$ and $|\boldsymbol{\Sigma}|$ is the determinant of $\boldsymbol{\Sigma}$.
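The density formula can be coded directly from the covariance matrix. A minimal sketch (the helper name gaussian_pdf and the test point are illustrative; the covariance matrix is the one from the first figure):

<syntaxhighlight lang="python">
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Multivariate normal density evaluated from the covariance matrix Sigma."""
    n = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)   # (x-mu)^T Sigma^{-1} (x-mu)
    norm = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
    return norm * np.exp(-0.5 * quad)

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
print(gaussian_pdf(np.array([0.0, 0.0]), mu, Sigma))  # density at the mean
</syntaxhighlight>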

Covariance matrix as a linear operator

Applied to one vector, the covariance matrix maps a linear combination $\mathbf{c}$ of the random variables $\mathbf{X}$ onto a vector of covariances with those variables: $$\mathbf{c}^{\mathsf{T}} \Sigma = \operatorname{cov}(\mathbf{c}^{\mathsf{T}} \mathbf{X}, \mathbf{X}).$$ Treated as a bilinear form, it yields the covariance between the two linear combinations: $$\mathbf{d}^{\mathsf{T}} \boldsymbol{\Sigma} \mathbf{c} = \operatorname{cov}(\mathbf{d}^{\mathsf{T}} \mathbf{X}, \mathbf{c}^{\mathsf{T}} \mathbf{X}).$$ The variance of a linear combination is then $\mathbf{c}^{\mathsf{T}} \boldsymbol{\Sigma} \mathbf{c}$, its covariance with itself.

Similarly, the (pseudo-)inverse covariance matrix provides an inner product $\langle \mathbf{c} - \boldsymbol{\mu} \mid \boldsymbol{\Sigma}^{+} \mid \mathbf{c} - \boldsymbol{\mu} \rangle$, which induces the Mahalanobis distance, a measure of the "unlikelihood" of $\mathbf{c}$.
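Both views are one-line expressions in code. A minimal sketch (the matrix and vectors are made-up examples):

<syntaxhighlight lang="python">
import numpy as np

Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
mu = np.array([1.0, -1.0])
c = np.array([0.5, 2.0]); d = np.array([1.0, 0.0])

var_c = c @ Sigma @ c        # variance of the combination c^T X
cov_dc = d @ Sigma @ c       # covariance between d^T X and c^T X

# Mahalanobis distance of c from mu, via the pseudo-inverse of Sigma
delta = c - mu
maha = np.sqrt(delta @ np.linalg.pinv(Sigma) @ delta)
print(var_c, cov_dc, maha)
</syntaxhighlight>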

Which matrices are covariance matrices?

From basic property 4 above, let $\mathbf{b}$ be a $(p \times 1)$ real-valued vector; then $$\operatorname{var}(\mathbf{b}^{\mathsf{T}} \mathbf{X}) = \mathbf{b}^{\mathsf{T}} \operatorname{var}(\mathbf{X}) \mathbf{b},$$ which must always be nonnegative, since it is the variance of a real-valued random variable, so a covariance matrix is always a positive-semidefinite matrix.

The above argument can be expanded as follows: $$\mathbf{w}^{\mathsf{T}} \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right] \mathbf{w} = \operatorname{E}\left[\mathbf{w}^{\mathsf{T}} (\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}} \mathbf{w}\right] = \operatorname{E}\left[\left(\mathbf{w}^{\mathsf{T}} (\mathbf{X} - \operatorname{E}[\mathbf{X}])\right)^2\right] \geq 0,$$ where the last inequality follows from the observation that $\mathbf{w}^{\mathsf{T}} (\mathbf{X} - \operatorname{E}[\mathbf{X}])$ is a scalar.

Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose $M$ is a $p \times p$ symmetric positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that $M$ has a nonnegative symmetric square root, which can be denoted by $M^{1/2}$. Let $\mathbf{X}$ be any $p \times 1$ column vector-valued random variable whose covariance matrix is the $p \times p$ identity matrix. Then $$\operatorname{var}(\mathbf{M}^{1/2} \mathbf{X}) = \mathbf{M}^{1/2} \operatorname{var}(\mathbf{X}) \mathbf{M}^{1/2} = \mathbf{M}.$$
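The construction in this proof can be run numerically: compute the symmetric square root via the spectral theorem and push an identity-covariance sample through it. A minimal sketch (the target matrix M is a made-up example):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
M = np.array([[2.0, 0.8], [0.8, 1.0]])   # symmetric positive-semidefinite target

# Nonnegative symmetric square root via the spectral theorem: M = V diag(w) V^T
w, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# X with identity covariance, mapped through M^{1/2}
X = rng.standard_normal(size=(2, 500_000))
Y = M_half @ X
print(np.cov(Y, bias=True))              # approaches M as the sample grows
</syntaxhighlight>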

Complex random vectors


The variance of a complex scalar-valued random variable with expected value $\mu$ is conventionally defined using complex conjugation: $$\operatorname{var}(Z) = \operatorname{E}\left[(Z - \mu_Z)\overline{(Z - \mu_Z)}\right],$$ where the complex conjugate of a complex number $z$ is denoted $\overline{z}$; thus the variance of a complex random variable is a real number.

If ๐™=(Z1,โ€ฆ,Zn)๐–ณ is a column vector of complex-valued random variables, then the conjugate transpose ๐™๐–ง is formed by both transposing and conjugating. In the following expression, the product of a vector with its conjugate transpose results in a square matrix called the covariance matrix, as its expectation:[8]Template:Rp K๐™๐™=cov[๐™,๐™]=E[(๐™โˆ’๐๐™)(๐™โˆ’๐๐™)๐–ง], The matrix so obtained will be Hermitian positive-semidefinite,[9] with real numbers in the main diagonal and complex numbers off-diagonal.


Pseudo-covariance matrix

For complex random vectors, another kind of second central moment, the pseudo-covariance matrix (also called relation matrix), is defined as follows: $$J_{\mathbf{ZZ}} = \operatorname{cov}[\mathbf{Z}, \overline{\mathbf{Z}}] = \operatorname{E}\left[(\mathbf{Z} - \boldsymbol{\mu}_{\mathbf{Z}})(\mathbf{Z} - \boldsymbol{\mu}_{\mathbf{Z}})^{\mathsf{T}}\right].$$

In contrast to the covariance matrix defined above, Hermitian transposition is replaced by transposition in the definition. Its diagonal elements may be complex-valued; it is a complex symmetric matrix.

Estimation

Template:Main If ๐Œ๐— and ๐Œ๐˜ are centered data matrices of dimension pร—n and qร—n respectively, i.e. with n columns of observations of p and q rows of variables, from which the row means have been subtracted, then, if the row means were estimated from the data, sample covariance matrices ๐๐—๐— and ๐๐—๐˜ can be defined to be ๐๐—๐—=1nโˆ’1๐Œ๐—๐Œ๐—๐–ณ,๐๐—๐˜=1nโˆ’1๐Œ๐—๐Œ๐˜๐–ณ or, if the row means were known a priori, ๐๐—๐—=1n๐Œ๐—๐Œ๐—๐–ณ,๐๐—๐˜=1n๐Œ๐—๐Œ๐˜๐–ณ.
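A minimal sketch of the Bessel-corrected estimator on synthetic centered data matrices (dimensions and names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
p, q, n = 3, 2, 10_000
MX = rng.normal(size=(p, n)); MY = rng.normal(size=(q, n))

# Center the rows (row means estimated from the data)
MX = MX - MX.mean(axis=1, keepdims=True)
MY = MY - MY.mean(axis=1, keepdims=True)

Q_XX = MX @ MX.T / (n - 1)               # sample covariance, 1/(n-1) factor
Q_XY = MX @ MY.T / (n - 1)               # sample cross-covariance

assert np.allclose(Q_XX, np.cov(MX))     # np.cov defaults to the same 1/(n-1)
</syntaxhighlight>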

These empirical sample covariance matrices are the most straightforward and most often used estimators for the covariance matrices, but other estimators also exist, including regularised or shrinkage estimators, which may have better properties.

Applications

The covariance matrix is a useful tool in many different areas. From it a transformation matrix can be derived, called a whitening transformation, that allows one to completely decorrelate the data[10] or, from a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal component analysis (PCA) and the Karhunen–Loève transform (KL-transform).

The covariance matrix plays a key role in financial economics, especially in portfolio theory and its mutual fund separation theorem and in the capital asset pricing model. The matrix of covariances among various assets' returns is used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.

Use in optimization

The evolution strategy, a particular family of randomized search heuristics, fundamentally relies on a covariance matrix in its mechanism. The characteristic mutation operator draws the update step from a multivariate normal distribution using an evolving covariance matrix. There is a formal proof that the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix of the search landscape, up to a scalar factor and small random fluctuations (proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation).[11] Intuitively, this result is supported by the rationale that the optimal covariance distribution can offer mutation steps whose equidensity probability contours match the level sets of the landscape, and so they maximize the progress rate.

Covariance mapping

In covariance mapping the values of the $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ or $\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I})$ matrix are plotted as a 2-dimensional map. When vectors $\mathbf{X}$ and $\mathbf{Y}$ are discrete random functions, the map shows statistical relations between different regions of the random functions. Statistically independent regions of the functions show up on the map as zero-level flatland, while positive or negative correlations show up, respectively, as hills or valleys.

In practice the column vectors $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{I}$ are acquired experimentally as sets of $n$ samples, e.g. $$[\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_n] = \begin{bmatrix} X_1(t_1) & X_2(t_1) & \cdots & X_n(t_1) \\ X_1(t_2) & X_2(t_2) & \cdots & X_n(t_2) \\ \vdots & \vdots & \ddots & \vdots \\ X_1(t_m) & X_2(t_m) & \cdots & X_n(t_m) \end{bmatrix},$$ where $X_j(t_i)$ is the $i$-th discrete value in sample $j$ of the random function $X(t)$. The expected values needed in the covariance formula are estimated using the sample mean, e.g. $$\langle \mathbf{X} \rangle = \frac{1}{n} \sum_{j=1}^{n} \mathbf{X}_j,$$ and the covariance matrix is estimated by the sample covariance matrix $$\operatorname{cov}(\mathbf{X},\mathbf{Y}) \approx \langle \mathbf{X}\mathbf{Y}^{\mathsf{T}} \rangle - \langle \mathbf{X} \rangle \langle \mathbf{Y}^{\mathsf{T}} \rangle,$$ where the angular brackets denote sample averaging as before, except that Bessel's correction should be made to avoid bias. Using this estimate the partial covariance matrix can be calculated as $$\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I}) = \operatorname{cov}(\mathbf{X},\mathbf{Y}) - \operatorname{cov}(\mathbf{X},\mathbf{I}) \left(\operatorname{cov}(\mathbf{I},\mathbf{I}) \setminus \operatorname{cov}(\mathbf{I},\mathbf{Y})\right),$$ where the backslash denotes the left matrix division operator, which bypasses the requirement to invert a matrix and is available in some computational packages such as Matlab.[12]
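In NumPy the role of Matlab's left division is played by np.linalg.solve, which likewise avoids forming an explicit inverse. A minimal sketch (the helper names pcov_map and ccov are illustrative, not from the cited software):

<syntaxhighlight lang="python">
import numpy as np

def pcov_map(X, Y, I):
    """Partial covariance map pcov(X, Y | I); samples are columns."""
    def ccov(A, B):
        # Bessel-corrected sample cross-covariance, variables in rows
        Ac = A - A.mean(axis=1, keepdims=True)
        Bc = B - B.mean(axis=1, keepdims=True)
        return Ac @ Bc.T / (A.shape[1] - 1)
    # np.linalg.solve(a, b) computes a^{-1} b without inverting a,
    # mirroring Matlab's left division a \ b
    return ccov(X, Y) - ccov(X, I) @ np.linalg.solve(ccov(I, I), ccov(I, Y))
</syntaxhighlight>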

Figure 1: Construction of a partial covariance map of N2 molecules undergoing Coulomb explosion induced by a free-electron laser.[13] Panels a and b map the two terms of the covariance matrix, which is shown in panel c. Panel d maps common-mode correlations via intensity fluctuations of the laser. Panel e maps the partial covariance matrix that is corrected for the intensity fluctuations. Panel f shows that 10% overcorrection improves the map and makes ion-ion correlations clearly visible. Owing to momentum conservation these correlations appear as lines approximately perpendicular to the autocorrelation line (and to the periodic modulations which are caused by detector ringing).

Fig. 1 illustrates how a partial covariance map is constructed on an example of an experiment performed at the FLASH free-electron laser in Hamburg.[13] The random function $X(t)$ is the time-of-flight spectrum of ions from a Coulomb explosion of nitrogen molecules multiply ionised by a laser pulse. Since only a few hundreds of molecules are ionised at each laser pulse, the single-shot spectra are highly fluctuating. However, collecting typically $m = 10^4$ such spectra, $\mathbf{X}_j(t)$, and averaging them over $j$ produces a smooth spectrum $\langle \mathbf{X}(t) \rangle$, which is shown in red at the bottom of Fig. 1. The average spectrum $\langle \mathbf{X} \rangle$ reveals several nitrogen ions in the form of peaks broadened by their kinetic energy, but finding the correlations between the ionisation stages and the ion momenta requires calculating a covariance map.

In the example of Fig. 1 spectra $\mathbf{X}_j(t)$ and $\mathbf{Y}_j(t)$ are the same, except that the range of the time-of-flight $t$ differs. Panel a shows $\langle \mathbf{X}\mathbf{Y}^{\mathsf{T}} \rangle$, panel b shows $\langle \mathbf{X} \rangle \langle \mathbf{Y}^{\mathsf{T}} \rangle$ and panel c shows their difference, which is $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ (note a change in the colour scale). Unfortunately, this map is overwhelmed by uninteresting, common-mode correlations induced by laser intensity fluctuating from shot to shot. To suppress such correlations the laser intensity $I_j$ is recorded at every shot, put into $\mathbf{I}$ and $\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I})$ is calculated as panels d and e show. The suppression of the uninteresting correlations is, however, imperfect because there are other sources of common-mode fluctuations than the laser intensity and in principle all these sources should be monitored in vector $\mathbf{I}$. Yet in practice it is often sufficient to overcompensate the partial covariance correction as panel f shows, where interesting correlations of ion momenta are now clearly visible as straight lines centred on ionisation stages of atomic nitrogen.

Two-dimensional infrared spectroscopy

Two-dimensional infrared spectroscopy employs correlation analysis to obtain 2D spectra of the condensed phase. There are two versions of this analysis: synchronous and asynchronous. Mathematically, the former is expressed in terms of the sample covariance matrix and the technique is equivalent to covariance mapping.[14]


References



  1. ↑ Template:Cite book
  2. ↑ Template:Cite book
  3. ↑ Template:Cite book
  4. ↑ Template:Cite web
  5. ↑ Template:Cite book
  6. ↑ W J Krzanowski "Principles of Multivariate Analysis" (Oxford University Press, New York, 1988), Chap. 14.4; K V Mardia, J T Kent and J M Bibby "Multivariate Analysis" (Academic Press, London, 1997), Chap. 6.5.3; T W Anderson "An Introduction to Multivariate Statistical Analysis" (Wiley, New York, 2003), 3rd ed., Chaps. 2.5.1 and 4.3.1.
  7. ↑ Template:Cite arXiv
  8. ↑ Template:Cite book
  9. ↑ Template:Cite web
  10. ↑ Template:Cite journal
  11. ↑ Template:Cite journal
  12. ↑ L J Frasinski "Covariance mapping techniques" J. Phys. B: At. Mol. Opt. Phys. 49 152004 (2016)
  13. ↑ O Kornilov, M Eckstein, M Rosenblatt, C P Schulz, K Motomura, A Rouzée, J Klei, L Foucar, M Siano, A Lübcke, F. Schapper, P Johnsson, D M P Holland, T Schlatholter, T Marchenko, S Düsterer, K Ueda, M J J Vrakking and L J Frasinski "Coulomb explosion of diatomic molecules in intense XUV fields mapped by partial covariance" J. Phys. B: At. Mol. Opt. Phys. 46 164028 (2013)
  14. ↑ Template:Cite journal