Multinomial test

The multinomial test is the statistical test of the null hypothesis that the parameters of a multinomial distribution equal specified values; it is used for categorical data.[1]

The test begins with a sample of $N$ items, each of which has been observed to fall into one of $k$ categories. Define $\mathbf{x} = (x_1, x_2, \dots, x_k)$ as the observed numbers of items in each category; hence $\sum_{i=1}^{k} x_i = N$.

Next, define a vector of parameters $\boldsymbol{\pi} = (\pi_1, \pi_2, \dots, \pi_k)$, where $\sum_{i=1}^{k} \pi_i = 1$. These are the parameter values under the null hypothesis $H_0$.

The exact probability of the observed configuration 𝐱 under the null hypothesis is given by

$$\mathbb{P}(\mathbf{x})_0 = N! \prod_{i=1}^{k} \frac{\pi_i^{x_i}}{x_i!}.$$
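
As a minimal sketch of this formula (the counts and probabilities below are invented purely for illustration), the exact probability can be computed directly in Python:

```python
from math import factorial, prod

def multinomial_pmf(x, pi):
    """Exact probability of the observed counts x under category probabilities pi."""
    N = sum(x)
    return factorial(N) * prod(p ** xi / factorial(xi) for xi, p in zip(x, pi))

# Hypothetical data: N = 10 items spread over k = 3 categories
x = (3, 4, 3)               # observed counts x_i
pi = (0.2, 0.5, 0.3)        # null-hypothesis probabilities pi_i
print(multinomial_pmf(x, pi))   # P(x)_0
```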

The significance probability for the test is the probability of occurrence of the data set observed, or of a data set less likely than that observed, if the null hypothesis is true. Using an exact test, this is calculated as

p[π“ˆπ’Ύβ„Š]=𝐲:β„™(𝐲)β„™(𝐱)0β„™(𝐲)

where the sum ranges over all outcomes as likely as, or less likely than, that observed. In practice this becomes computationally onerous as $k$ and $N$ increase, so it is probably only worth using exact tests for small samples. For larger samples, asymptotic approximations are accurate enough and easier to calculate.
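
For small $N$ and $k$ the exact significance probability can be obtained by brute-force enumeration of every configuration $\mathbf{y}$ with $\sum_i y_i = N$. A sketch, using the same invented counts and probabilities as above:

```python
from math import factorial, prod

def multinomial_pmf(y, pi):
    N = sum(y)
    return factorial(N) * prod(p ** yi / factorial(yi) for yi, p in zip(y, pi))

def configurations(N, k):
    """Generate every k-tuple of non-negative integers summing to N."""
    if k == 1:
        yield (N,)
        return
    for first in range(N + 1):
        for rest in configurations(N - first, k - 1):
            yield (first,) + rest

def exact_multinomial_p(x, pi):
    """Sum P(y) over all configurations y no more likely than the observed x."""
    p_obs = multinomial_pmf(x, pi)
    tol = 1e-12 * p_obs                     # guard against floating-point ties
    return sum(py for y in configurations(sum(x), len(x))
               if (py := multinomial_pmf(y, pi)) <= p_obs + tol)

x = (3, 4, 3)
pi = (0.2, 0.5, 0.3)
print(exact_multinomial_p(x, pi))
```

The number of configurations grows combinatorially (it equals the number of ways to place $N$ items into $k$ cells), which is why this approach is only practical for small samples.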

One of these approximations is the likelihood ratio. An alternative hypothesis can be defined under which each value $\pi_i$ is replaced by its maximum likelihood estimate $p_i = x_i / N$. The exact probability of the observed configuration 𝐱 under the alternative hypothesis is given by

$$\mathbb{P}(\mathbf{x})_A = N! \prod_{i=1}^{k} \frac{p_i^{x_i}}{x_i!}.$$

The natural logarithm of the likelihood ratio $LR = \mathbb{P}(\mathbf{x})_0 / \mathbb{P}(\mathbf{x})_A$ between these two probabilities, multiplied by $-2$, is then the statistic for the likelihood ratio test

$$-2\ln(LR) = -2\sum_{i=1}^{k} x_i \ln\!\left(\frac{\pi_i}{p_i}\right).$$

(The factor of $-2$ is chosen to make the statistic asymptotically chi-squared distributed, for convenient comparison to a familiar statistic commonly used for the same application.)
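
With the same invented data as before, a sketch of the statistic and its asymptotic p-value (from the chi-squared comparison discussed next), using scipy's chi-squared tail function for convenience:

```python
from math import log
from scipy.stats import chi2

def minus_2_ln_LR(x, pi):
    """-2 ln(LR) = -2 * sum_i x_i * ln(pi_i / p_i), where p_i = x_i / N.

    Terms with x_i = 0 contribute nothing and are skipped."""
    N = sum(x)
    return -2 * sum(xi * log(p * N / xi) for xi, p in zip(x, pi) if xi > 0)

x = (3, 4, 3)               # illustrative observed counts
pi = (0.2, 0.5, 0.3)        # null-hypothesis probabilities
g = minus_2_ln_LR(x, pi)
# Asymptotic p-value from chi-squared with k - 1 degrees of freedom
print(g, chi2.sf(g, df=len(x) - 1))
```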

If the null hypothesis is true, then as $N$ increases, the distribution of $-2\ln(LR)$ converges to that of chi-squared with $k-1$ degrees of freedom. However, it has long been known (e.g. Lawley[2]) that for finite sample sizes the moments of $-2\ln(LR)$ are greater than those of chi-squared, thus inflating the probability of type I errors (false positives). The difference between the moments of chi-squared and those of the test statistic is a function of $N^{-1}$. Williams[3] showed that the first moment can be matched as far as $N^{-2}$ if the test statistic is divided by a factor given by

$$q_1 = 1 + \frac{\sum_{i=1}^{k} \pi_i^{-1} - 1}{6N(k-1)}.$$
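
A short sketch of this correction factor (values again invented); the corrected statistic is $-2\ln(LR)$ divided by $q_1$ before the chi-squared comparison:

```python
def williams_q1(pi, N):
    """Williams' correction factor: divide -2 ln(LR) by this value."""
    k = len(pi)
    return 1 + (sum(1 / p for p in pi) - 1) / (6 * N * (k - 1))

pi = (0.2, 0.5, 0.3)        # null-hypothesis probabilities
N = 10                      # sample size
print(williams_q1(pi, N))   # corrected statistic would be g / williams_q1(pi, N)
```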

In the special case where the null hypothesis is that all the values $\pi_i$ are equal to $1/k$ (i.e. it stipulates a uniform distribution), this simplifies to

$$q_1 = 1 + \frac{k+1}{6N}.$$

Subsequently, Smith et al.[4] derived a dividing factor which matches the first moment as far as $N^{-3}$. For the case of equal values of $\pi_i$, this factor is

$$q_2 = 1 + \frac{k+1}{6N} + \frac{k^2}{6N^2}.$$
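
For the uniform null hypothesis both dividing factors are simple functions of $N$ and $k$; a small sketch with invented values:

```python
def q1_uniform(N, k):
    """Williams' factor when every pi_i equals 1/k."""
    return 1 + (k + 1) / (6 * N)

def q2_uniform(N, k):
    """Smith et al. factor (first moment matched to order N**-3) for equal pi_i."""
    return 1 + (k + 1) / (6 * N) + k ** 2 / (6 * N ** 2)

N, k = 10, 3                # illustrative sample size and number of categories
print(q1_uniform(N, k))     # 1.0666...
print(q2_uniform(N, k))     # 1.0816...
```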

The null hypothesis can also be tested by using Pearson's chi-squared test

$$\chi^2 = \sum_{i=1}^{k} \frac{(x_i - E_i)^2}{E_i}$$

where $E_i = N\pi_i$ is the expected number of cases in category $i$ under the null hypothesis. This statistic also converges to a chi-squared distribution with $k-1$ degrees of freedom when the null hypothesis is true, but does so from below, as it were, rather than from above as $-2\ln(LR)$ does, so it may be preferable to the uncorrected version of $-2\ln(LR)$ for small samples.
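
A sketch of the Pearson statistic for the same illustrative data, computed directly from the formula and then checked against scipy.stats.chisquare, which applies the same asymptotic chi-squared approximation:

```python
from scipy.stats import chi2, chisquare

x = (3, 4, 3)                       # illustrative observed counts
pi = (0.2, 0.5, 0.3)                # null-hypothesis probabilities
N = sum(x)
expected = [N * p for p in pi]      # E_i = N * pi_i

stat = sum((xi - ei) ** 2 / ei for xi, ei in zip(x, expected))
print(stat, chi2.sf(stat, df=len(x) - 1))

# The same test via scipy
print(chisquare(x, f_exp=expected))
```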

References

  1. Read & Cressie (1988).
  2. Lawley (1956).
  3. Williams (1976).
  4. Smith, Rae, Manderscheid & Manderscheid (1981).