Information matrix test

In econometrics, the information matrix test is used to determine whether a regression model is misspecified. The test was developed by Halbert White,[1] who observed that in a correctly specified model and under standard regularity assumptions, the Fisher information matrix can be expressed in either of two ways: as the outer product of the gradient, or as a function of the Hessian matrix of the log-likelihood function.

Consider a linear model $\mathbf{y} = \mathbf{X}\beta + \mathbf{u}$, where the errors $\mathbf{u}$ are assumed to be distributed $N(0, \sigma^2 \mathbf{I})$. If the parameters $\beta$ and $\sigma^2$ are stacked in the vector $\theta^{\mathsf{T}} = \begin{bmatrix} \beta & \sigma^2 \end{bmatrix}$, the resulting log-likelihood function is

$$\ell(\theta) = -\frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \left(\mathbf{y} - \mathbf{X}\beta\right)^{\mathsf{T}} \left(\mathbf{y} - \mathbf{X}\beta\right)$$
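
Differentiating with respect to each parameter block gives the score explicitly (a routine calculation, spelled out here for concreteness):

$$\frac{\partial \ell(\theta)}{\partial \beta} = \frac{1}{\sigma^2} \mathbf{X}^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\beta), \qquad \frac{\partial \ell(\theta)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} (\mathbf{y} - \mathbf{X}\beta)^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\beta)$$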

The information matrix can then be expressed as

$$\mathbf{I}(\theta) = \mathrm{E}\left[\left(\frac{\partial \ell(\theta)}{\partial \theta}\right) \left(\frac{\partial \ell(\theta)}{\partial \theta}\right)^{\mathsf{T}}\right]$$

that is, as the expected value of the outer product of the gradient, or score. Second, it can be written as the negative of the expected value of the Hessian matrix of the log-likelihood function:

$$\mathbf{I}(\theta) = -\mathrm{E}\left[\frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}}\right]$$
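
As a quick numerical illustration (not part of White's original presentation), the sketch below simulates a correctly specified Gaussian linear model and compares the average outer product of per-observation scores with the average negative Hessian. Under correct specification both matrices estimate the same $\mathbf{I}(\theta)$ and should agree up to simulation noise; the dimensions and parameter values are arbitrary.

```python
import numpy as np

# Simulate a correctly specified Gaussian linear model y = X beta + u
rng = np.random.default_rng(0)
n, k = 5000, 3
X = rng.normal(size=(n, k))
beta = np.array([1.0, -0.5, 2.0])
sigma2 = 1.5
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

u = y - X @ beta  # residuals at the true parameter values

# Per-observation score of theta = (beta, sigma2):
#   d l_i / d beta   = x_i u_i / sigma2
#   d l_i / d sigma2 = -1/(2 sigma2) + u_i^2 / (2 sigma2^2)
scores = np.hstack([X * (u / sigma2)[:, None],
                    (-0.5 / sigma2 + u**2 / (2 * sigma2**2))[:, None]])

# Outer-product-of-gradients form, averaged over observations
opg = scores.T @ scores / n

# Negative expected Hessian for this model (block diagonal):
#   beta block: X'X / (n sigma2),  sigma2 entry: 1 / (2 sigma2^2)
neg_hessian = np.zeros((k + 1, k + 1))
neg_hessian[:k, :k] = X.T @ X / (n * sigma2)
neg_hessian[k, k] = 0.5 / sigma2**2

print(np.round(opg, 3))          # both printouts should be close
print(np.round(neg_hessian, 3))
```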

If the model is correctly specified, both expressions should be equal. Combining the equivalent forms yields

$$\Delta(\theta) = \sum_{i=1}^{n} \left[\frac{\partial^2 \ell_i(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}} + \frac{\partial \ell_i(\theta)}{\partial \theta} \frac{\partial \ell_i(\theta)}{\partial \theta^{\mathsf{T}}}\right]$$

where $\ell_i(\theta)$ denotes the contribution of observation $i$ to the log-likelihood, and $\Delta(\theta)$ is an $(r \times r)$ random matrix, $r$ being the number of parameters. White showed that the elements of $n^{-1/2} \Delta(\hat{\theta})$, where $\hat{\theta}$ is the maximum likelihood estimator, are asymptotically normally distributed with zero means when the model is correctly specified.[2] In small samples, however, the test generally performs poorly.[3]
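
To make the construction concrete, here is a minimal sketch, assuming the Gaussian linear model above, of how $\Delta(\hat{\theta})$ could be assembled from per-observation Hessians and score outer products evaluated at the MLE. The function name `delta_matrix` and its interface are illustrative, and the covariance estimate needed to turn these elements into a formal test statistic is omitted.

```python
import numpy as np

def delta_matrix(X, y):
    """Assemble Delta(theta_hat) for the Gaussian linear model:
    the sum over observations of the per-observation Hessian plus
    the per-observation score outer product, both at the MLE."""
    n, k = X.shape
    # MLE: OLS coefficients and the (uncorrected) error variance
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta_hat
    s2 = u @ u / n

    delta = np.zeros((k + 1, k + 1))
    for xi, ui in zip(X, u):
        # Score of observation i at theta_hat = (beta_hat, s2)
        g = np.append(xi * ui / s2, -0.5 / s2 + ui**2 / (2 * s2**2))
        # Hessian of observation i at theta_hat
        H = np.zeros((k + 1, k + 1))
        H[:k, :k] = -np.outer(xi, xi) / s2
        H[:k, k] = H[k, :k] = -xi * ui / s2**2
        H[k, k] = 0.5 / s2**2 - ui**2 / s2**3
        delta += H + np.outer(g, g)
    return delta
```

In White's construction, the distinct (lower-triangular) elements of $n^{-1/2} \Delta(\hat{\theta})$ are then stacked into a vector and combined with an estimate of their asymptotic covariance to form a quadratic-form statistic with a limiting chi-squared distribution.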

References

Further reading