Hotelling's T-squared distribution

In statistics, particularly in hypothesis testing, the Hotelling's T-squared distribution (T^2), proposed by Harold Hotelling,[1] is a multivariate probability distribution that is closely related to the F-distribution and is most notable for arising as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying the Student's t-distribution. The Hotelling's t-squared statistic (t^2) is a generalization of Student's t-statistic that is used in multivariate hypothesis testing.[2]

Motivation

The distribution arises in multivariate statistics when undertaking tests of the differences between the (multivariate) means of different populations, where tests for univariate problems would make use of a t-test. The distribution is named for Harold Hotelling, who developed it as a generalization of Student's t-distribution.[1]

Definition

If the vector d follows a multivariate Gaussian distribution with zero mean and unit covariance matrix, d \sim \mathcal{N}_p(\mathbf{0}_p, \mathbf{I}_{p,p}), and M is a p \times p random matrix with a Wishart distribution with unit scale matrix and m degrees of freedom, M \sim W(\mathbf{I}_{p,p}, m), and d and M are independent of each other, then the quadratic form X has a Hotelling distribution with parameters p and m:[3]

X = m\, d^\mathsf{T} M^{-1} d \sim T^2(p, m).

It can be shown that if a random variable X has Hotelling's T-squared distribution, X \sim T^2_{p,m}, then:[1]

\frac{m - p + 1}{p m}\, X \sim F_{p,\, m - p + 1},

where F_{p,\, m - p + 1} is the F-distribution with parameters p and m − p + 1.
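The relation above can be checked by simulation. The following minimal sketch (not from the article; it assumes NumPy and SciPy are available) draws d and M as defined, forms X, and compares the rescaled sample against the stated F-distribution with a Kolmogorov–Smirnov test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p, m, n_sim = 3, 10, 20000

    x = np.empty(n_sim)
    for i in range(n_sim):
        d = rng.standard_normal(p)                   # d ~ N(0_p, I_{p,p})
        M = stats.wishart(df=m, scale=np.eye(p)).rvs(random_state=rng)
        x[i] = m * d @ np.linalg.solve(M, d)         # X = m d' M^{-1} d

    f_sample = (m - p + 1) / (p * m) * x             # should be ~ F(p, m - p + 1)
    print(stats.kstest(f_sample, stats.f(p, m - p + 1).cdf))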

Hotelling t-squared statistic

Let \hat{\mathbf{\Sigma}} be the sample covariance of n independent p-variate observations \mathbf{x}_1, \dots, \mathbf{x}_n with sample mean \overline{\mathbf{x}}:

\hat{\mathbf{\Sigma}} = \frac{1}{n - 1} \sum_{i=1}^{n} (\mathbf{x}_i - \overline{\mathbf{x}})(\mathbf{x}_i - \overline{\mathbf{x}})',

where we denote transpose by an apostrophe. It can be shown that \hat{\mathbf{\Sigma}} is a positive (semi) definite matrix and that (n - 1)\hat{\mathbf{\Sigma}} follows a p-variate Wishart distribution with n − 1 degrees of freedom.[4] The sample covariance matrix of the mean reads \hat{\mathbf{\Sigma}}_{\overline{\mathbf{x}}} = \hat{\mathbf{\Sigma}}/n.[5]

The Hotelling's t-squared statistic is then defined as:[6]

t^2 = (\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \hat{\mathbf{\Sigma}}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}) = n (\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \hat{\mathbf{\Sigma}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}),

which is proportional to the Mahalanobis distance between the sample mean and \boldsymbol{\mu}. Because of this, one should expect the statistic to assume low values if \overline{\mathbf{x}} \approx \boldsymbol{\mu}, and high values if they are different.

From the distribution,

t^2 \sim T^2_{p, n-1} = \frac{p(n - 1)}{n - p} F_{p,\, n-p},

where F_{p,\, n-p} is the F-distribution with parameters p and n − p.

In order to calculate a p-value (unrelated to the variable p here), note that the distribution of t^2 equivalently implies that

\frac{n - p}{p(n - 1)}\, t^2 \sim F_{p,\, n-p}.

Then, the quantity on the left-hand side can be compared against the F-distribution to evaluate the p-value corresponding to the sample. A confidence region may also be determined using similar logic.
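As an illustrative sketch (not from the article; the helper name and test data are assumed, and NumPy/SciPy are required), the one-sample computation above can be carried out as follows:

    import numpy as np
    from scipy import stats

    def hotelling_one_sample(X, mu0):
        # X: (n, p) data matrix; mu0: hypothesized mean vector of length p
        n, p = X.shape
        xbar = X.mean(axis=0)
        S = np.cov(X, rowvar=False)              # sample covariance, divides by n - 1
        diff = xbar - mu0
        t2 = n * diff @ np.linalg.solve(S, diff) # n (xbar - mu)' S^{-1} (xbar - mu)
        f_stat = (n - p) / (p * (n - 1)) * t2    # ~ F(p, n - p) under the null
        return t2, stats.f.sf(f_stat, p, n - p)  # statistic and p-value

    rng = np.random.default_rng(1)
    X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=50)
    print(hotelling_one_sample(X, np.zeros(3)))  # large p-value expected under H0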

Motivation

Let \mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma}) denote a p-variate normal distribution with location \boldsymbol{\mu} and known covariance \mathbf{\Sigma}. Let

\mathbf{x}_1, \dots, \mathbf{x}_n \sim \mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma})

be n independent identically distributed (iid) random variables, which may be represented as p \times 1 column vectors of real numbers. Define

\overline{\mathbf{x}} = \frac{\mathbf{x}_1 + \cdots + \mathbf{x}_n}{n}

to be the sample mean with covariance \mathbf{\Sigma}_{\overline{\mathbf{x}}} = \mathbf{\Sigma}/n. It can be shown that

(\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}) \sim \chi^2_p,

where \chi^2_p is the chi-squared distribution with p degrees of freedom.[7]
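A brief sketch of the standardization argument behind this fact (a standard derivation, not the cited proof): since \sqrt{n}\, \mathbf{\Sigma}^{-1/2} (\overline{\mathbf{x}} - \boldsymbol{\mu}) \sim \mathcal{N}_p(\mathbf{0}_p, \mathbf{I}_{p,p}), the quadratic form is the squared Euclidean norm of a standard p-variate normal vector, and hence a sum of p squared independent standard normals:

(\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}) = \left\| \sqrt{n}\, \mathbf{\Sigma}^{-1/2} (\overline{\mathbf{x}} - \boldsymbol{\mu}) \right\|^2 \sim \chi^2_p.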

Alternatively, the same result can be argued using density functions and characteristic functions.

Two-sample statistic

If \mathbf{x}_1, \dots, \mathbf{x}_{n_x} \sim N_p(\boldsymbol{\mu}, \mathbf{\Sigma}) and \mathbf{y}_1, \dots, \mathbf{y}_{n_y} \sim N_p(\boldsymbol{\mu}, \mathbf{\Sigma}), with the samples independently drawn from two independent multivariate normal distributions with the same mean and covariance, and we define

\overline{\mathbf{x}} = \frac{1}{n_x} \sum_{i=1}^{n_x} \mathbf{x}_i, \qquad \overline{\mathbf{y}} = \frac{1}{n_y} \sum_{i=1}^{n_y} \mathbf{y}_i

as the sample means, and

\hat{\mathbf{\Sigma}}_{\mathbf{x}} = \frac{1}{n_x - 1} \sum_{i=1}^{n_x} (\mathbf{x}_i - \overline{\mathbf{x}})(\mathbf{x}_i - \overline{\mathbf{x}})'

\hat{\mathbf{\Sigma}}_{\mathbf{y}} = \frac{1}{n_y - 1} \sum_{i=1}^{n_y} (\mathbf{y}_i - \overline{\mathbf{y}})(\mathbf{y}_i - \overline{\mathbf{y}})'

as the respective sample covariance matrices. Then

\hat{\mathbf{\Sigma}} = \frac{(n_x - 1)\hat{\mathbf{\Sigma}}_{\mathbf{x}} + (n_y - 1)\hat{\mathbf{\Sigma}}_{\mathbf{y}}}{n_x + n_y - 2}

is the unbiased pooled covariance matrix estimate (an extension of pooled variance).

Finally, the Hotelling's two-sample t-squared statistic is

t^2 = \frac{n_x n_y}{n_x + n_y} (\overline{\mathbf{x}} - \overline{\mathbf{y}})'\, \hat{\mathbf{\Sigma}}^{-1} (\overline{\mathbf{x}} - \overline{\mathbf{y}}) \sim T^2(p, n_x + n_y - 2).

It can be related to the F-distribution by[4]

\frac{n_x + n_y - p - 1}{(n_x + n_y - 2)\, p}\, t^2 \sim F(p, n_x + n_y - 1 - p).
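A minimal sketch of the two-sample computation (the helper name is illustrative and not from the article; NumPy and SciPy are assumed), combining the pooled covariance estimate with the F relation above:

    import numpy as np
    from scipy import stats

    def hotelling_two_sample(X, Y):
        # X: (nx, p) and Y: (ny, p) samples assumed to share a covariance matrix
        nx, p = X.shape
        ny = Y.shape[0]
        diff = X.mean(axis=0) - Y.mean(axis=0)
        S_pooled = ((nx - 1) * np.cov(X, rowvar=False)          # pooled covariance
                    + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
        t2 = nx * ny / (nx + ny) * diff @ np.linalg.solve(S_pooled, diff)
        f_stat = (nx + ny - p - 1) / ((nx + ny - 2) * p) * t2   # ~ F(p, nx+ny-1-p)
        return t2, stats.f.sf(f_stat, p, nx + ny - 1 - p)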

The non-null distribution of this statistic is the noncentral F-distribution (the ratio of a noncentral chi-squared random variable and an independent central chi-squared random variable):

\frac{n_x + n_y - p - 1}{(n_x + n_y - 2)\, p}\, t^2 \sim F(p, n_x + n_y - 1 - p; \delta),

with

\delta = \frac{n_x n_y}{n_x + n_y}\, \boldsymbol{d}'\, \mathbf{\Sigma}^{-1} \boldsymbol{d},

where \boldsymbol{d} = \boldsymbol{\mu}_{\mathbf{x}} - \boldsymbol{\mu}_{\mathbf{y}} is the difference vector between the population means.
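Because the non-null distribution is available, the power of the test can be evaluated directly. A hedged sketch (all population values below are assumed purely for illustration) using SciPy's noncentral F-distribution:

    import numpy as np
    from scipy import stats

    p, nx, ny, alpha = 2, 30, 30, 0.05
    mu_diff = np.array([0.5, 0.5])               # assumed population mean difference d
    Sigma = np.array([[1.0, 0.3],
                      [0.3, 1.0]])               # assumed common covariance

    delta = nx * ny / (nx + ny) * mu_diff @ np.linalg.solve(Sigma, mu_diff)
    dfn, dfd = p, nx + ny - 1 - p
    f_crit = stats.f.ppf(1 - alpha, dfn, dfd)    # rejection threshold under the null
    print(stats.ncf.sf(f_crit, dfn, dfd, delta)) # power: P(reject) under alternative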

In the two-variable case, the formula simplifies considerably, making it possible to see how the correlation, \rho, between the variables affects t^2. If we define

d_1 = \overline{x}_1 - \overline{y}_1, \qquad d_2 = \overline{x}_2 - \overline{y}_2

and

s_1 = \sqrt{\Sigma_{11}}, \qquad s_2 = \sqrt{\Sigma_{22}}, \qquad \rho = \Sigma_{12}/(s_1 s_2) = \Sigma_{21}/(s_1 s_2)

then

t^2 = \frac{n_x n_y}{(n_x + n_y)(1 - \rho^2)} \left[ \left( \frac{d_1}{s_1} \right)^2 + \left( \frac{d_2}{s_2} \right)^2 - 2 \rho \left( \frac{d_1}{s_1} \right) \left( \frac{d_2}{s_2} \right) \right]

Thus, if the two components of the difference vector \mathbf{d} = \overline{\mathbf{x}} - \overline{\mathbf{y}} are of the same sign, then in general t^2 becomes smaller as \rho becomes more positive. If the components are of opposite sign, t^2 becomes larger as \rho becomes more positive.
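The agreement between this two-variable formula and the matrix form of the statistic is easy to verify numerically; a quick check with purely illustrative values:

    import numpy as np

    nx, ny = 12, 15
    d = np.array([0.8, -0.3])                    # d1, d2
    Sigma = np.array([[2.0, 0.6],
                      [0.6, 1.5]])               # pooled covariance estimate
    s1, s2 = np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[1, 1])
    rho = Sigma[0, 1] / (s1 * s2)

    t2_matrix = nx * ny / (nx + ny) * d @ np.linalg.solve(Sigma, d)
    t2_scalar = (nx * ny / ((nx + ny) * (1 - rho ** 2))
                 * ((d[0] / s1) ** 2 + (d[1] / s2) ** 2
                    - 2 * rho * (d[0] / s1) * (d[1] / s2)))
    print(np.isclose(t2_matrix, t2_scalar))      # True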

A univariate special case can be found in Welch's t-test.

More robust and powerful tests than Hotelling's two-sample test have been proposed in the literature; see, for example, the interpoint-distance-based tests, which can be applied even when the number of variables is comparable with, or larger than, the number of subjects.[8][9]

References
