Extensions of Fisher's method

In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid. Fisher's method is a way of combining the information in the p-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting p-values) be statistically independent.

Dependent statistics

A principal limitation of Fisher's method is that it is designed to combine independent p-values, which makes it unreliable for combining dependent p-values. To overcome this limitation, a number of methods have been developed to extend its utility.

Known covariance

Brown's method

Fisher showed that, for k independent p-values, minus twice the sum of their logarithms follows a χ²-distribution with 2k degrees of freedom:[1][2]

<math>X = -2\sum_{i=1}^{k}\log_e(p_i) \sim \chi^2(2k).</math>
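
As a concrete illustration, here is a minimal sketch of the independent case in Python using SciPy; the function name and example p-values are illustrative, not taken from the sources.

 import numpy as np
 from scipy.stats import chi2
 
 def fisher_combine(pvalues):
     """Fisher's method for independent p-values."""
     p = np.asarray(pvalues, dtype=float)
     k = p.size
     x = -2.0 * np.sum(np.log(p))   # X = -2 * sum(log p_i)
     # Under the null, X ~ chi-squared with 2k degrees of freedom.
     return x, chi2.sf(x, df=2 * k)
 
 x, p_combined = fisher_combine([0.01, 0.20, 0.45])

SciPy's built-in scipy.stats.combine_pvalues computes the same statistic (method='fisher' is the default).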

In the case that these p-values are not independent, Brown proposed approximating X using a scaled χ²-distribution, cχ²(k′), with k′ degrees of freedom.

The mean and variance of this scaled χ2 variable are:

<math>\operatorname{E}[c\chi^2(k')] = ck',</math>
<math>\operatorname{Var}[c\chi^2(k')] = 2c^2k'.</math>

where <math>c = \operatorname{Var}(X)/(2\operatorname{E}[X])</math> and <math>k' = 2(\operatorname{E}[X])^2/\operatorname{Var}(X)</math>. Choosing c and k′ this way matches the first two moments of X, so the approximation is accurate up to two moments.
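
A minimal sketch of the resulting moment-matching step, assuming E[X] and Var(X) under the null have already been computed from the known covariance of the test statistics (the function name is hypothetical):

 from scipy.stats import chi2
 
 def brown_combine(x, mean_x, var_x):
     """Approximate the null of X = -2*sum(log p_i) by c*chi2(k')."""
     c = var_x / (2.0 * mean_x)           # scale factor
     k_prime = 2.0 * mean_x ** 2 / var_x  # effective degrees of freedom
     # P[c * chi2(k') > x] = P[chi2(k') > x / c]
     return chi2.sf(x / c, df=k_prime)

As a sanity check, with independent p-values E[X] = 2k and Var(X) = 4k, so c = 1 and k′ = 2k, recovering Fisher's original distribution.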

Unknown covariance

Harmonic mean p-value

Main article: Harmonic mean p-value

The harmonic mean p-value offers an alternative to Fisher's method for combining p-values when the dependency structure is unknown but the tests cannot be assumed to be independent.[3][4]
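
As a sketch, the statistic itself is simply a weighted harmonic mean of the p-values; names and default weights below are illustrative, and turning the statistic into a calibrated test requires the additional steps described in the main article.

 import numpy as np
 
 def harmonic_mean_p(pvalues, weights=None):
     """Weighted harmonic mean of p-values."""
     p = np.asarray(pvalues, dtype=float)
     w = (np.full(p.size, 1.0 / p.size) if weights is None
          else np.asarray(weights, dtype=float))
     return np.sum(w) / np.sum(w / p)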

Kost's method: t approximation

This method requires the test statistics' covariance structure to be known up to a scalar multiplicative constant.[2]
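
As an illustration of how such covariance-based extensions proceed, the sketch below converts pairwise correlations between the test statistics into covariances between the −2 log_e(p_i) terms and then reuses Brown's moment matching; the cubic polynomial is the approximation published by Kost and McDermott,[2] quoted from memory of that source, and the function name is hypothetical.

 import numpy as np
 from scipy.stats import chi2
 
 def kost_combine(pvalues, corr):
     """corr: k-by-k correlation matrix of the underlying statistics."""
     p = np.asarray(pvalues, dtype=float)
     k = p.size
     x = -2.0 * np.sum(np.log(p))
     mean_x = 2.0 * k   # each -2*log(p_i) ~ chi2(2): mean 2
     var_x = 4.0 * k    # ... and variance 4 under independence
     for i in range(k):
         for j in range(i + 1, k):
             rho = corr[i, j]
             # Kost-McDermott cubic fit to cov(-2 log p_i, -2 log p_j)
             var_x += 2.0 * (3.263 * rho + 0.710 * rho ** 2
                             + 0.027 * rho ** 3)
     c = var_x / (2.0 * mean_x)
     k_prime = 2.0 * mean_x ** 2 / var_x
     return chi2.sf(x / c, df=k_prime)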

Cauchy combination test

This is conceptually similar to Fisher's method: it computes a sum of transformed p-values. Unlike Fisher's method, which uses a log transformation to obtain a test statistic with a chi-squared null distribution, the Cauchy combination test uses a tangent transformation to obtain a test statistic whose tail is asymptotic to that of a Cauchy distribution under the null. The test statistic is:

<math>X = \sum_{i=1}^{k}\omega_i\tan[(0.5 - p_i)\pi],</math>

where <math>\omega_i</math> are non-negative weights, subject to <math>\sum_{i=1}^{k}\omega_i = 1</math>. Under the null, the <math>p_i</math> are uniformly distributed on [0, 1], therefore the <math>\tan[(0.5 - p_i)\pi]</math> are standard Cauchy distributed. Under some mild assumptions, but allowing for arbitrary dependency between the <math>p_i</math>, the tail of the distribution of X is asymptotic to that of a standard Cauchy distribution. More precisely, letting W denote a standard Cauchy random variable:

<math>\lim_{t\to\infty}\frac{P[X > t]}{P[W > t]} = 1.</math>

This leads to a combined hypothesis test, in which X is compared to the quantiles of the Cauchy distribution.[5]
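
A minimal sketch of this test with equal weights (the function name is illustrative):

 import numpy as np
 from scipy.stats import cauchy
 
 def cauchy_combine(pvalues, weights=None):
     """Cauchy combination test: weighted sum of tan-transformed p-values."""
     p = np.asarray(pvalues, dtype=float)
     w = (np.full(p.size, 1.0 / p.size) if weights is None
          else np.asarray(weights, dtype=float))
     x = np.sum(w * np.tan((0.5 - p) * np.pi))
     return cauchy.sf(x)   # tail probability P[W > x], W standard Cauchy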

References

Template:Reflist