Scoring algorithm


In statistics, the scoring algorithm, also known as Fisher's scoring,[1] is a form of Newton's method used to solve maximum likelihood equations numerically. It is named after Ronald Fisher.

Sketch of derivation

Let Y₁, …, Yₙ be independent and identically distributed random variables with twice differentiable p.d.f. f(y; θ), and suppose we wish to calculate the maximum likelihood estimator (M.L.E.) θ* of θ. First, suppose we have a starting point θ₀ for our algorithm, and consider a first-order Taylor expansion of the score function V(θ) about θ₀:

V(\theta) \approx V(\theta_0) - \mathcal{J}(\theta_0)(\theta - \theta_0),

where

\mathcal{J}(\theta_0) = -\sum_{i=1}^{n} \left. \nabla\nabla^{\top} \log f(Y_i; \theta) \right|_{\theta = \theta_0}

is the observed information matrix at θ₀. Now, setting θ = θ*, using that V(θ*) = 0, and rearranging gives us:

\theta^* \approx \theta_0 + \mathcal{J}^{-1}(\theta_0) V(\theta_0).

We therefore use the algorithm

\theta_{m+1} = \theta_m + \mathcal{J}^{-1}(\theta_m) V(\theta_m),

and under certain regularity conditions, it can be shown that θₘ → θ*.
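The iteration above can be sketched concretely. Here is a minimal, hypothetical example (not from the source) that applies the Newton update with the observed information to i.i.d. Poisson data, where the score and observed information have closed forms derived from the Poisson log-likelihood:

```python
import numpy as np

# Hypothetical example: for i.i.d. Poisson(lam) data y_1, ..., y_n, the
# log-likelihood gives score V(lam) = sum(y)/lam - n and observed
# information J(lam) = sum(y)/lam**2. The MLE is the sample mean.
rng = np.random.default_rng(0)
y = rng.poisson(lam=3.0, size=1000)

theta = 1.0  # starting point theta_0
for _ in range(25):
    score = y.sum() / theta - y.size   # V(theta_m)
    obs_info = y.sum() / theta**2      # J(theta_m)
    theta = theta + score / obs_info   # theta_{m+1} = theta_m + J^{-1} V

print(theta)  # converges to the MLE, the sample mean of y
```

In this one-parameter case the matrix inverse reduces to ordinary division; starting points too far from the MLE can cause the Newton iteration to diverge, which is why a reasonable θ₀ matters.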

Fisher scoring

In practice, 𝒥(θ) is usually replaced by ℐ(θ) = E[𝒥(θ)], the Fisher information, thus giving us the Fisher scoring algorithm:

\theta_{m+1} = \theta_m + \mathcal{I}^{-1}(\theta_m) V(\theta_m).

Under some regularity conditions, if θₘ is a consistent estimator, then θₘ₊₁ (the correction after a single step) is 'optimal' in the sense that its error distribution is asymptotically identical to that of the true maximum likelihood estimate.[2]
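To illustrate the substitution of the expected information, here is a hypothetical sketch (not from the source) for the same Poisson rate problem, where the expected information is ℐ(λ) = n/λ since E[Y] = λ:

```python
import numpy as np

# Hypothetical example: Fisher scoring for the rate of i.i.d. Poisson
# data. Score: V(lam) = S/lam - n with S = sum(y); expected (Fisher)
# information: I(lam) = n * E[Y] / lam**2 = n / lam.
rng = np.random.default_rng(0)
y = rng.poisson(lam=3.0, size=1000)
n, S = y.size, y.sum()

theta = 1.0  # starting point theta_0
for _ in range(5):
    score = S / theta - n     # V(theta_m)
    fisher_info = n / theta   # I(theta_m) = E[J(theta_m)]
    theta = theta + score / fisher_info

# In this particular model a single Fisher-scoring step already lands
# exactly on the MLE S/n, illustrating the one-step optimality above.
print(theta)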
