Cross-covariance matrix


In probability theory and statistics, a cross-covariance matrix is a matrix whose element in the i, j position is the covariance between the i-th element of a random vector and the j-th element of another random vector. When the two random vectors are the same, the cross-covariance matrix is referred to as the covariance matrix. A random vector is a random variable with multiple dimensions. Each element of the vector is a scalar random variable. Each element has either a finite number of observed empirical values or a finite or infinite number of potential values. The potential values are specified by a theoretical joint probability distribution. Intuitively, the cross-covariance matrix generalizes the notion of covariance to multiple dimensions.

The cross-covariance matrix of two random vectors 𝐗 and 𝐘 is typically denoted by $K_{\mathbf{X}\mathbf{Y}}$ or $\Sigma_{\mathbf{X}\mathbf{Y}}$.

Definition

For random vectors 𝐗 and 𝐘, each containing random elements whose expected value and variance exist, the cross-covariance matrix of 𝐗 and 𝐘 is defined by[1]

$$K_{\mathbf{X}\mathbf{Y}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y}) \stackrel{\text{def}}{=} \operatorname{E}\!\left[(\mathbf{X} - \mu_{\mathbf{X}})(\mathbf{Y} - \mu_{\mathbf{Y}})^{\mathsf{T}}\right]$$

where $\mu_{\mathbf{X}} = \operatorname{E}[\mathbf{X}]$ and $\mu_{\mathbf{Y}} = \operatorname{E}[\mathbf{Y}]$ are vectors containing the expected values of 𝐗 and 𝐘. The vectors 𝐗 and 𝐘 need not have the same dimension, and either might be a scalar value.

The cross-covariance matrix is the matrix whose (i,j) entry is the covariance

$$K_{X_i Y_j} = \operatorname{cov}[X_i, Y_j] = \operatorname{E}\!\left[(X_i - \operatorname{E}[X_i])(Y_j - \operatorname{E}[Y_j])\right]$$

between the i-th element of 𝐗 and the j-th element of 𝐘. This gives the following component-wise definition of the cross-covariance matrix.

Kπ—π˜=[E[(X1E[X1])(Y1E[Y1])]E[(X1E[X1])(Y2E[Y2])]E[(X1E[X1])(YnE[Yn])]E[(X2E[X2])(Y1E[Y1])]E[(X2E[X2])(Y2E[Y2])]E[(X2E[X2])(YnE[Yn])]E[(XmE[Xm])(Y1E[Y1])]E[(XmE[Xm])(Y2E[Y2])]E[(XmE[Xm])(YnE[Yn])]]

Example

For example, if $\mathbf{X} = (X_1, X_2, X_3)^{\mathsf{T}}$ and $\mathbf{Y} = (Y_1, Y_2)^{\mathsf{T}}$ are random vectors, then $\operatorname{cov}(\mathbf{X}, \mathbf{Y})$ is a $3 \times 2$ matrix whose $(i,j)$-th entry is $\operatorname{cov}(X_i, Y_j)$.
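A minimal NumPy sketch of this example follows; it estimates such a $3 \times 2$ cross-covariance matrix from simulated samples, once via the matrix formula and once entry by entry. The toy joint distribution and the sample size are assumptions made for illustration, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative joint distribution (an assumption, not from the article):
# X is 3-dimensional standard normal; Y copies X's first two coordinates
# plus independent noise, so the cross-covariance is nonzero.
N = 100_000
X = rng.normal(size=(N, 3))
Y = X[:, :2] + 0.5 * rng.normal(size=(N, 2))

# Sample estimate of K_XY = E[(X - mu_X)(Y - mu_Y)^T], samples as rows.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
K_XY = Xc.T @ Yc / N                      # shape (3, 2)

# The same matrix, entry by entry: K[i, j] = cov(X_i, Y_j).
K_entry = np.empty((3, 2))
for i in range(3):
    for j in range(2):
        K_entry[i, j] = np.mean(Xc[:, i] * Yc[:, j])

assert np.allclose(K_XY, K_entry)
print(K_XY)  # approximately [[1, 0], [0, 1], [0, 0]] for this construction
```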

Properties

For the cross-covariance matrix, the following basic properties apply:[2]

  1. $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathsf{T}}] - \mu_{\mathbf{X}} \mu_{\mathbf{Y}}^{\mathsf{T}}$
  2. $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{cov}(\mathbf{Y}, \mathbf{X})^{\mathsf{T}}$
  3. $\operatorname{cov}(\mathbf{X}_1 + \mathbf{X}_2, \mathbf{Y}) = \operatorname{cov}(\mathbf{X}_1, \mathbf{Y}) + \operatorname{cov}(\mathbf{X}_2, \mathbf{Y})$
  4. $\operatorname{cov}(A\mathbf{X} + \mathbf{a}, B^{\mathsf{T}}\mathbf{Y} + \mathbf{b}) = A \operatorname{cov}(\mathbf{X}, \mathbf{Y}) B$
  5. If $\mathbf{X}$ and $\mathbf{Y}$ are independent (or somewhat less restrictively, if every random variable in $\mathbf{X}$ is uncorrelated with every random variable in $\mathbf{Y}$), then $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = 0_{p \times q}$

where 𝐗, π—πŸ and π—πŸ are random p×1 vectors, 𝐘 is a random q×1 vector, 𝐚 is a q×1 vector, 𝐛 is a p×1 vector, A and B are q×p matrices of constants, and 0p×q is a p×q matrix of zeroes.

Definition for complex random vectors

If 𝐙 and 𝐖 are complex random vectors, the definition of the cross-covariance matrix is slightly changed. Transposition is replaced by Hermitian transposition:

$$K_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}(\mathbf{Z}, \mathbf{W}) \stackrel{\text{def}}{=} \operatorname{E}\!\left[(\mathbf{Z} - \mu_{\mathbf{Z}})(\mathbf{W} - \mu_{\mathbf{W}})^{\mathrm{H}}\right]$$

For complex random vectors, another matrix called the pseudo-cross-covariance matrix is defined as follows:

$$J_{\mathbf{Z}\mathbf{W}} = \operatorname{cov}(\mathbf{Z}, \overline{\mathbf{W}}) \stackrel{\text{def}}{=} \operatorname{E}\!\left[(\mathbf{Z} - \mu_{\mathbf{Z}})(\mathbf{W} - \mu_{\mathbf{W}})^{\mathsf{T}}\right]$$
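A short NumPy sketch of both complex definitions, using an assumed circularly symmetric toy distribution: the cross-covariance conjugates the second factor, while the pseudo-cross-covariance does not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy complex vectors (dimensions and distribution are assumptions):
# Z is 3-dimensional circularly symmetric Gaussian; W depends on Z.
N = 100_000
Z = rng.normal(size=(N, 3)) + 1j * rng.normal(size=(N, 3))
W = Z[:, :2] + 0.5 * (rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2)))

Zc = Z - Z.mean(axis=0)
Wc = W - W.mean(axis=0)

# K_ZW uses the Hermitian transpose: conjugate the second factor.
K_ZW = Zc.T @ Wc.conj() / N

# J_ZW uses the plain transpose: no conjugation.
J_ZW = Zc.T @ Wc / N

print(np.round(K_ZW, 2))  # roughly [[2, 0], [0, 2], [0, 0]] here
print(np.round(J_ZW, 2))  # roughly zero: this construction is circularly symmetric
```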

Uncorrelatedness

Two random vectors 𝐗 and 𝐘 are called uncorrelated if their cross-covariance matrix $K_{\mathbf{X}\mathbf{Y}}$ is a zero matrix.[1]

Complex random vectors 𝐙 and 𝐖 are called uncorrelated if their covariance matrix and pseudo-covariance matrix are zero, i.e. if $K_{\mathbf{Z}\mathbf{W}} = J_{\mathbf{Z}\mathbf{W}} = 0$.
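As a sanity check under the same illustrative assumptions as above, two independently drawn complex vectors should have both sample matrices near zero:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Independent complex vectors: the sample estimates of K_ZW and J_ZW
# should both vanish, up to sampling error of order 1/sqrt(N).
Z = rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))
W = rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))

Zc = Z - Z.mean(axis=0)
Wc = W - W.mean(axis=0)

K_ZW = Zc.T @ Wc.conj() / N
J_ZW = Zc.T @ Wc / N

print(np.max(np.abs(K_ZW)), np.max(np.abs(J_ZW)))  # both ~0.01 or smaller
```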

References
