Information matrix test

In econometrics, the information matrix test is used to determine whether a regression model is misspecified. The test was developed by Halbert White,[1] who observed that in a correctly specified model and under standard regularity assumptions, the Fisher information matrix can be expressed in either of two ways: as the outer product of the gradient, or as a function of the Hessian matrix of the log-likelihood function.

Consider a linear model \(\mathbf{y} = \mathbf{X}\beta + \mathbf{u}\), where the errors \(\mathbf{u}\) are assumed to be distributed \(N(0, \sigma^2 \mathbf{I})\). If the parameters \(\beta\) and \(\sigma^2\) are stacked in the vector \(\theta^{\mathsf{T}} = \begin{bmatrix}\beta & \sigma^2\end{bmatrix}\), the resulting log-likelihood function is

\[
\ell(\theta) = -\frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \left(\mathbf{y} - \mathbf{X}\beta\right)^{\mathsf{T}} \left(\mathbf{y} - \mathbf{X}\beta\right)
\]
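
For this model the score vector has closed-form components, obtained by differentiating the log-likelihood with respect to \(\beta\) and \(\sigma^2\):

\[
\frac{\partial \ell(\theta)}{\partial \beta} = \frac{1}{\sigma^2}\,\mathbf{X}^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\beta),
\qquad
\frac{\partial \ell(\theta)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\,(\mathbf{y} - \mathbf{X}\beta)^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\beta)
\]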

The information matrix can then be expressed as

\[
\mathbf{I}(\theta) = \mathrm{E}\left[\left(\frac{\partial \ell(\theta)}{\partial \theta}\right) \left(\frac{\partial \ell(\theta)}{\partial \theta}\right)^{\mathsf{T}}\right]
\]

that is, the expected value of the outer product of the gradient, or score. Second, it can be written as the negative of the expected value of the Hessian matrix of the log-likelihood function:

\[
\mathbf{I}(\theta) = -\mathrm{E}\left[\frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}}\right]
\]
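
The equality of the two forms can be checked numerically. The following sketch (a NumPy simulation; the sample size, parameter values, and variable names are illustrative assumptions, not part of the original treatment) evaluates the outer-product-of-gradients estimate and the negative-Hessian estimate at the true parameters of a correctly specified Gaussian linear model:

```python
# Illustrative sketch: compare the two information matrix estimates for a
# correctly specified Gaussian linear model, evaluated at the true theta.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma2 = 100_000, np.array([1.0, -2.0]), 1.5

X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + regressor
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

resid = y - X @ beta                      # errors at the true parameters
# Per-observation score: (u_i x_i / sigma2, -1/(2 sigma2) + u_i^2 / (2 sigma2^2))
scores = np.column_stack([resid[:, None] * X / sigma2,
                          -0.5 / sigma2 + resid**2 / (2 * sigma2**2)])
opg = scores.T @ scores / n               # (1/n) sum_i s_i s_i'

k = X.shape[1]
neg_hess = np.zeros((k + 1, k + 1))       # -(1/n) sum_i H_i, analytic blocks
neg_hess[:k, :k] = X.T @ X / (n * sigma2)
neg_hess[:k, k] = neg_hess[k, :k] = X.T @ resid / (n * sigma2**2)
neg_hess[k, k] = -0.5 / sigma2**2 + (resid**2).mean() / sigma2**3

print(np.round(opg, 3))                   # both approximate I(theta)
print(np.round(neg_hess, 3))
```

Under misspecification (for example, heteroskedastic errors) the two estimates diverge, and this divergence is what the test exploits.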

If the model is correctly specified, both expressions should be equal. Combining the equivalent forms yields

\[
\Delta(\theta) = \sum_{i=1}^n \left[\frac{\partial^2 \ell_i(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}} + \frac{\partial \ell_i(\theta)}{\partial \theta} \frac{\partial \ell_i(\theta)}{\partial \theta^{\mathsf{T}}}\right]
\]

where \(\ell_i(\theta)\) denotes the log-likelihood contribution of the \(i\)-th observation and \(\Delta(\theta)\) is an \(r \times r\) random matrix, \(r\) being the number of parameters. White showed that the elements of \(n^{-1/2} \Delta(\hat{\theta})\), where \(\hat{\theta}\) is the maximum likelihood estimator, are asymptotically normally distributed with zero means when the model is correctly specified.[2] In small samples, however, the test generally performs poorly.[3]
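
A companion sketch of the statistic itself (again simulation-based and illustrative; the variance normalization and critical values of White's full procedure are omitted): fit the model by maximum likelihood, evaluate \(\Delta(\hat{\theta})\) from the per-observation scores and Hessians, and inspect the scaled elements, which should fluctuate around zero under correct specification:

```python
# Illustrative sketch of the information matrix test statistic
# Delta(theta_hat) = sum_i [H_i + s_i s_i'] for a Gaussian linear model.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=1.2, size=n)

# Gaussian MLE: OLS for beta_hat, sigma2_hat = RSS / n.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / n

k = X.shape[1]
scores = np.column_stack([resid[:, None] * X / s2,
                          -0.5 / s2 + resid**2 / (2 * s2**2)])
opg = scores.T @ scores                   # sum_i s_i s_i'

hess = np.zeros((k + 1, k + 1))           # sum_i H_i, analytic blocks
hess[:k, :k] = -X.T @ X / s2
hess[:k, k] = hess[k, :k] = -X.T @ resid / s2**2
hess[k, k] = n / (2 * s2**2) - (resid**2).sum() / s2**3

delta = hess + opg
print(np.round(delta / np.sqrt(n), 3))    # fluctuates around 0 if well specified
```

For the linear model, the \(\beta\)-block of \(\Delta(\hat{\theta})\) reduces to \(\hat{\sigma}^{-4} \sum_i \mathbf{x}_i \mathbf{x}_i^{\mathsf{T}} (\hat{u}_i^2 - \hat{\sigma}^2)\), so in this special case the statistic is essentially a check for heteroskedasticity.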

References

