Matrix normal distribution

In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.

Definition

The probability density function for the random matrix $\mathbf{X}$ ($n \times p$) that follows the matrix normal distribution $\mathcal{MN}_{n,p}(\mathbf{M},\mathbf{U},\mathbf{V})$ has the form:

$$p(\mathbf{X}\mid\mathbf{M},\mathbf{U},\mathbf{V}) = \frac{\exp\left(-\tfrac{1}{2}\operatorname{tr}\left[\mathbf{V}^{-1}(\mathbf{X}-\mathbf{M})^{T}\mathbf{U}^{-1}(\mathbf{X}-\mathbf{M})\right]\right)}{(2\pi)^{np/2}\,|\mathbf{V}|^{n/2}\,|\mathbf{U}|^{p/2}}$$

where $\operatorname{tr}$ denotes trace and $\mathbf{M}$ is $n \times p$, $\mathbf{U}$ is $n \times n$ and $\mathbf{V}$ is $p \times p$, and the density is understood as the probability density function with respect to the standard Lebesgue measure in $\mathbb{R}^{n \times p}$, i.e. the measure corresponding to integration with respect to $dx_{11}\,dx_{21}\cdots dx_{n1}\,dx_{12}\cdots dx_{n2}\cdots dx_{np}$.
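
As a quick numerical illustration, a minimal sketch (assuming NumPy and a recent SciPy, whose scipy.stats.matrix_normal implements this distribution; the helper name matnorm_pdf and the test matrices below are purely illustrative) evaluates the density exactly as written above and compares it with the library value:

```python
import numpy as np
from scipy.stats import matrix_normal  # assumes a recent SciPy

def matnorm_pdf(X, M, U, V):
    """Evaluate the matrix normal density directly from the formula above."""
    n, p = M.shape
    D = X - M
    quad = np.trace(np.linalg.inv(V) @ D.T @ np.linalg.inv(U) @ D)
    norm = ((2 * np.pi) ** (n * p / 2)
            * np.linalg.det(V) ** (n / 2)
            * np.linalg.det(U) ** (p / 2))
    return np.exp(-0.5 * quad) / norm

rng = np.random.default_rng(0)
n, p = 3, 2
M = rng.standard_normal((n, p))
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)  # SPD among-row covariance
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)  # SPD among-column covariance
X = rng.standard_normal((n, p))

print(matnorm_pdf(X, M, U, V))                           # direct formula
print(matrix_normal(mean=M, rowcov=U, colcov=V).pdf(X))  # should agree
```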

The matrix normal is related to the multivariate normal distribution in the following way:

$$\mathbf{X} \sim \mathcal{MN}_{n \times p}(\mathbf{M},\mathbf{U},\mathbf{V}),$$

if and only if

$$\operatorname{vec}(\mathbf{X}) \sim \mathcal{N}_{np}\left(\operatorname{vec}(\mathbf{M}),\, \mathbf{V} \otimes \mathbf{U}\right)$$

where $\otimes$ denotes the Kronecker product and $\operatorname{vec}(\mathbf{M})$ denotes the vectorization of $\mathbf{M}$.
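
This equivalence is easy to spot-check numerically. The sketch below (assuming SciPy; note that $\operatorname{vec}$ stacks columns, i.e. ravel(order="F") in NumPy, and that the test matrices are arbitrary) compares the matrix normal log-density of $\mathbf{X}$ with the multivariate normal log-density of $\operatorname{vec}(\mathbf{X})$ under covariance $\mathbf{V} \otimes \mathbf{U}$:

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(1)
n, p = 3, 2
M = rng.standard_normal((n, p))
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)
X = rng.standard_normal((n, p))

vec = lambda Z: Z.ravel(order="F")  # column-stacking vectorization

lhs = matrix_normal(mean=M, rowcov=U, colcov=V).logpdf(X)
rhs = multivariate_normal(mean=vec(M), cov=np.kron(V, U)).logpdf(vec(X))
print(np.isclose(lhs, rhs))  # True
```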

Proof

The equivalence between the above matrix normal and multivariate normal density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF:

$$\begin{aligned}
-\tfrac{1}{2}\operatorname{tr}\left[\mathbf{V}^{-1}(\mathbf{X}-\mathbf{M})^{T}\mathbf{U}^{-1}(\mathbf{X}-\mathbf{M})\right]
&= -\tfrac{1}{2}\operatorname{vec}(\mathbf{X}-\mathbf{M})^{T}\operatorname{vec}\left(\mathbf{U}^{-1}(\mathbf{X}-\mathbf{M})\mathbf{V}^{-1}\right) \\
&= -\tfrac{1}{2}\operatorname{vec}(\mathbf{X}-\mathbf{M})^{T}\left(\mathbf{V}^{-1}\otimes\mathbf{U}^{-1}\right)\operatorname{vec}(\mathbf{X}-\mathbf{M}) \\
&= -\tfrac{1}{2}\left[\operatorname{vec}(\mathbf{X})-\operatorname{vec}(\mathbf{M})\right]^{T}\left(\mathbf{V}\otimes\mathbf{U}\right)^{-1}\left[\operatorname{vec}(\mathbf{X})-\operatorname{vec}(\mathbf{M})\right]
\end{aligned}$$

which is the argument of the exponent of the multivariate normal PDF with respect to Lebesgue measure in $\mathbb{R}^{np}$. The proof is completed by using the determinant property $|\mathbf{V} \otimes \mathbf{U}| = |\mathbf{V}|^{n}\,|\mathbf{U}|^{p}$.
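
Both ingredients of the proof, the trace–Kronecker step and the determinant identity, can be verified on random matrices. A small sketch (NumPy only; $\mathbf{D}$ stands in for $\mathbf{X}-\mathbf{M}$ and the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 3
D = rng.standard_normal((n, p))                  # plays the role of X - M
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)
Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)
vec = lambda Z: Z.ravel(order="F")

# tr[V^-1 D^T U^-1 D] = vec(D)^T (V^-1 kron U^-1) vec(D)
print(np.isclose(np.trace(Vi @ D.T @ Ui @ D),
                 vec(D) @ np.kron(Vi, Ui) @ vec(D)))

# |V kron U| = |V|^n |U|^p
print(np.isclose(np.linalg.det(np.kron(V, U)),
                 np.linalg.det(V) ** n * np.linalg.det(U) ** p))
```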

Properties

If $\mathbf{X} \sim \mathcal{MN}_{n \times p}(\mathbf{M},\mathbf{U},\mathbf{V})$, then we have the following properties:[1][2]

Expected values

The mean, or expected value, is:

$$\operatorname{E}[\mathbf{X}] = \mathbf{M}$$

and we have the following second-order expectations:

$$\operatorname{E}\left[(\mathbf{X}-\mathbf{M})(\mathbf{X}-\mathbf{M})^{T}\right] = \mathbf{U}\operatorname{tr}(\mathbf{V})$$
$$\operatorname{E}\left[(\mathbf{X}-\mathbf{M})^{T}(\mathbf{X}-\mathbf{M})\right] = \mathbf{V}\operatorname{tr}(\mathbf{U})$$

where tr denotes trace.

More generally, for appropriately dimensioned matrices $\mathbf{A}$ ($p \times p$), $\mathbf{B}$ ($n \times n$) and $\mathbf{C}$ ($p \times n$):

$$\operatorname{E}\left[\mathbf{X}\mathbf{A}\mathbf{X}^{T}\right] = \mathbf{U}\operatorname{tr}(\mathbf{A}^{T}\mathbf{V}) + \mathbf{M}\mathbf{A}\mathbf{M}^{T}$$
$$\operatorname{E}\left[\mathbf{X}^{T}\mathbf{B}\mathbf{X}\right] = \mathbf{V}\operatorname{tr}(\mathbf{U}\mathbf{B}^{T}) + \mathbf{M}^{T}\mathbf{B}\mathbf{M}$$
$$\operatorname{E}\left[\mathbf{X}\mathbf{C}\mathbf{X}\right] = \mathbf{U}\mathbf{C}^{T}\mathbf{V} + \mathbf{M}\mathbf{C}\mathbf{M}$$
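
These moment identities lend themselves to a Monte Carlo spot-check. The sketch below (assuming SciPy's matrix_normal sampler; the seed, sample size and test matrices are arbitrary) compares the empirical average of $\mathbf{X}\mathbf{A}\mathbf{X}^{T}$ with the first formula above:

```python
import numpy as np
from scipy.stats import matrix_normal

rng = np.random.default_rng(3)
n, p = 3, 2
M = rng.standard_normal((n, p))
Ru = rng.standard_normal((n, n)); U = Ru @ Ru.T + n * np.eye(n)
Rv = rng.standard_normal((p, p)); V = Rv @ Rv.T + p * np.eye(p)
A = rng.standard_normal((p, p))  # p x p so that X A X^T is defined

Xs = matrix_normal(mean=M, rowcov=U, colcov=V).rvs(size=100_000, random_state=rng)

empirical = np.mean([X @ A @ X.T for X in Xs], axis=0)
theory = U * np.trace(A.T @ V) + M @ A @ M.T
print(np.abs(empirical - theory).max())  # small, shrinking with the sample size
```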

Transformation

Transpose transform:

$$\mathbf{X}^{T} \sim \mathcal{MN}_{p \times n}(\mathbf{M}^{T},\mathbf{V},\mathbf{U})$$

Linear transform: let $\mathbf{D}$ ($r \times n$) be of full rank $r \le n$ and $\mathbf{C}$ ($p \times s$) be of full rank $s \le p$; then:

$$\mathbf{D}\mathbf{X}\mathbf{C} \sim \mathcal{MN}_{r \times s}\left(\mathbf{D}\mathbf{M}\mathbf{C},\, \mathbf{D}\mathbf{U}\mathbf{D}^{T},\, \mathbf{C}^{T}\mathbf{V}\mathbf{C}\right)$$
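
The covariance part of this rule follows from $\operatorname{vec}(\mathbf{D}\mathbf{X}\mathbf{C}) = (\mathbf{C}^{T} \otimes \mathbf{D})\operatorname{vec}(\mathbf{X})$, so the transformed covariance factorizes as $(\mathbf{C}^{T}\mathbf{V}\mathbf{C}) \otimes (\mathbf{D}\mathbf{U}\mathbf{D}^{T})$. A short numeric check of that Kronecker identity (NumPy only; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, r, s = 4, 3, 2, 2
D = rng.standard_normal((r, n))
C = rng.standard_normal((p, s))
Ru = rng.standard_normal((n, n)); U = Ru @ Ru.T + n * np.eye(n)
Rv = rng.standard_normal((p, p)); V = Rv @ Rv.T + p * np.eye(p)

T = np.kron(C.T, D)                      # vec(D X C) = T vec(X)
lhs = T @ np.kron(V, U) @ T.T            # covariance of vec(D X C)
rhs = np.kron(C.T @ V @ C, D @ U @ D.T)  # claimed Kronecker factorization
print(np.allclose(lhs, rhs))             # True
```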

Composition

The product of the densities of two matrix normal distributions, taken as functions of the same matrix argument,

$$\mathcal{MN}(\mathbf{M}_1,\mathbf{U}_1,\mathbf{V}_1) \cdot \mathcal{MN}(\mathbf{M}_2,\mathbf{U}_2,\mathbf{V}_2) \propto \mathcal{N}(\mu_c, \Sigma_c)$$

is proportional to a normal distribution with parameters:

$$\Sigma_c = \left(\mathbf{V}_1^{-1} \otimes \mathbf{U}_1^{-1} + \mathbf{V}_2^{-1} \otimes \mathbf{U}_2^{-1}\right)^{-1},$$
$$\mu_c = \Sigma_c \left(\left(\mathbf{V}_1^{-1} \otimes \mathbf{U}_1^{-1}\right)\operatorname{vec}(\mathbf{M}_1) + \left(\mathbf{V}_2^{-1} \otimes \mathbf{U}_2^{-1}\right)\operatorname{vec}(\mathbf{M}_2)\right).$$
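
One way to see this is to work in the vectorized space, where each factor contributes precision $\mathbf{V}_i^{-1} \otimes \mathbf{U}_i^{-1}$. The sketch below (SciPy assumed; it only checks proportionality, i.e. that the log-ratio is constant over the matrix argument) evaluates the sum of the two log-densities against the combined Gaussian:

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(5)
n, p = 3, 2
def spd(d):
    R = rng.standard_normal((d, d))
    return R @ R.T + d * np.eye(d)

M1, M2 = rng.standard_normal((n, p)), rng.standard_normal((n, p))
U1, V1, U2, V2 = spd(n), spd(p), spd(n), spd(p)
vec = lambda Z: Z.ravel(order="F")

P1 = np.kron(np.linalg.inv(V1), np.linalg.inv(U1))  # precision of factor 1 in vec space
P2 = np.kron(np.linalg.inv(V2), np.linalg.inv(U2))
Sigma_c = np.linalg.inv(P1 + P2)
mu_c = Sigma_c @ (P1 @ vec(M1) + P2 @ vec(M2))

mn1 = matrix_normal(mean=M1, rowcov=U1, colcov=V1)
mn2 = matrix_normal(mean=M2, rowcov=U2, colcov=V2)
combined = multivariate_normal(mean=mu_c, cov=Sigma_c)

# The difference of log-densities should be the same constant for every X.
logratios = [mn1.logpdf(X) + mn2.logpdf(X) - combined.logpdf(vec(X))
             for X in rng.standard_normal((5, n, p))]
print(np.allclose(logratios, logratios[0]))  # True
```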

Example

Let's imagine a sample of $n$ independent $p$-dimensional random vectors identically distributed according to a multivariate normal distribution:

$$\mathbf{Y}_i \sim \mathcal{N}_p(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \quad \text{with } i \in \{1,\ldots,n\}.$$

When defining the $n \times p$ matrix $\mathbf{X}$ for which the $i$-th row is $\mathbf{Y}_i$, we obtain:

$$\mathbf{X} \sim \mathcal{MN}_{n \times p}(\mathbf{M},\mathbf{U},\mathbf{V})$$

where each row of $\mathbf{M}$ is equal to $\boldsymbol{\mu}$, that is $\mathbf{M} = \mathbf{1}_n \boldsymbol{\mu}^{T}$; $\mathbf{U}$ is the $n \times n$ identity matrix, that is, the rows are independent; and $\mathbf{V} = \boldsymbol{\Sigma}$.
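
A direct numerical check of this example (SciPy assumed; the dimensions and seed are arbitrary): stack $n$ i.i.d. multivariate normal rows into $\mathbf{X}$ and compare the joint matrix normal log-density with the sum of the row-wise log-densities:

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(6)
n, p = 5, 3
mu = rng.standard_normal(p)
R = rng.standard_normal((p, p)); Sigma = R @ R.T + p * np.eye(p)

row_dist = multivariate_normal(mean=mu, cov=Sigma)
X = row_dist.rvs(size=n, random_state=rng)  # n i.i.d. rows, stacked as an n x p matrix

M = np.outer(np.ones(n), mu)                # every row of M equals mu
joint = matrix_normal(mean=M, rowcov=np.eye(n), colcov=Sigma).logpdf(X)
print(np.isclose(joint, row_dist.logpdf(X).sum()))  # True: U = I <=> independent rows
```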

Maximum likelihood parameter estimation

Given $k$ matrices, each of size $n \times p$, denoted $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_k$, which we assume have been sampled i.i.d. from a matrix normal distribution, the maximum likelihood estimate of the parameters can be obtained by maximizing:

$$\prod_{i=1}^{k} \mathcal{MN}_{n \times p}(\mathbf{X}_i \mid \mathbf{M},\mathbf{U},\mathbf{V}).$$

The solution for the mean has a closed form, namely

$$\mathbf{M} = \frac{1}{k}\sum_{i=1}^{k} \mathbf{X}_i
$$

but the covariance parameters do not. However, these parameters can be maximized iteratively by setting their gradients to zero, which gives the coupled updates:

$$\mathbf{U} = \frac{1}{kp}\sum_{i=1}^{k}(\mathbf{X}_i-\mathbf{M})\,\mathbf{V}^{-1}\,(\mathbf{X}_i-\mathbf{M})^{T}$$

and

$$\mathbf{V} = \frac{1}{kn}\sum_{i=1}^{k}(\mathbf{X}_i-\mathbf{M})^{T}\,\mathbf{U}^{-1}\,(\mathbf{X}_i-\mathbf{M}),$$

See for example [3] and references therein. The covariance parameters are non-identifiable in the sense that for any scale factor, s>0, we have:

$$\mathcal{MN}_{n \times p}(\mathbf{X} \mid \mathbf{M},\mathbf{U},\mathbf{V}) = \mathcal{MN}_{n \times p}\left(\mathbf{X} \mid \mathbf{M},\, s\mathbf{U},\, \tfrac{1}{s}\mathbf{V}\right).$$
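
A small sketch of this alternating maximization (SciPy is used only to simulate the data; the sample size, iteration count and seed are arbitrary, and only the scale-free product $\mathbf{V} \otimes \mathbf{U}$ is compared because of the non-identifiability just noted):

```python
import numpy as np
from scipy.stats import matrix_normal

rng = np.random.default_rng(7)
n, p, k = 4, 3, 500
def spd(d):
    R = rng.standard_normal((d, d))
    return R @ R.T + d * np.eye(d)

M_true, U_true, V_true = rng.standard_normal((n, p)), spd(n), spd(p)
Xs = matrix_normal(mean=M_true, rowcov=U_true, colcov=V_true).rvs(size=k, random_state=rng)

M = Xs.mean(axis=0)              # closed-form MLE of the mean
U, V = np.eye(n), np.eye(p)      # initialize the covariances
for _ in range(50):              # alternate the two stationary equations
    Vi = np.linalg.inv(V)
    U = sum((X - M) @ Vi @ (X - M).T for X in Xs) / (k * p)
    Ui = np.linalg.inv(U)
    V = sum((X - M).T @ Ui @ (X - M) for X in Xs) / (k * n)

# Compare V kron U, which is invariant to the scale ambiguity U -> sU, V -> V/s.
est, true = np.kron(V, U), np.kron(V_true, U_true)
print(np.linalg.norm(est - true) / np.linalg.norm(true))  # small, shrinking with k
```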

Drawing values from the distribution

Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let 𝐗 be an n by p matrix of np independent samples from the standard normal distribution, so that

$$\mathbf{X} \sim \mathcal{MN}_{n \times p}(\mathbf{0},\mathbf{I},\mathbf{I}).$$

Then let

$$\mathbf{Y} = \mathbf{M} + \mathbf{A}\mathbf{X}\mathbf{B},$$

so that

$$\mathbf{Y} \sim \mathcal{MN}_{n \times p}(\mathbf{M},\, \mathbf{A}\mathbf{A}^{T},\, \mathbf{B}^{T}\mathbf{B}),$$

where $\mathbf{A}$ and $\mathbf{B}$ can be chosen such that $\mathbf{A}\mathbf{A}^{T} = \mathbf{U}$ and $\mathbf{B}^{T}\mathbf{B} = \mathbf{V}$, e.g. by Cholesky decomposition or a similar matrix square root operation.
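
A minimal sampling sketch (NumPy only, with Cholesky factors chosen so that $\mathbf{A}\mathbf{A}^{T} = \mathbf{U}$ and $\mathbf{B}^{T}\mathbf{B} = \mathbf{V}$; the Monte Carlo check against the second-moment identity from the Properties section uses an arbitrary sample size):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 4, 3
M = rng.standard_normal((n, p))
Ru = rng.standard_normal((n, n)); U = Ru @ Ru.T + n * np.eye(n)
Rv = rng.standard_normal((p, p)); V = Rv @ Rv.T + p * np.eye(p)

A = np.linalg.cholesky(U)      # lower triangular, A A^T = U
B = np.linalg.cholesky(V).T    # upper triangular, B^T B = V

k = 200_000
Z = rng.standard_normal((k, n, p))   # k draws from MN(0, I, I)
Ys = M + A @ Z @ B                   # k draws from MN(M, U, V)

# Monte Carlo check of E[(Y - M)(Y - M)^T] = U tr(V).
empirical = np.einsum("kij,klj->il", Ys - M, Ys - M) / k
target = U * np.trace(V)
print(np.linalg.norm(empirical - target) / np.linalg.norm(target))  # small
```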

Relation to other distributions

Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, inverse-Wishart distribution and matrix t-distribution, but uses different notation from that employed here.

References

Template:Reflist
