S-estimator


The goal of S-estimators is to provide a simple high-breakdown regression estimator that shares the flexibility and nice asymptotic properties of M-estimators. The name "S-estimators" was chosen because they are based on estimators of scale.

We will consider estimators of scale defined by a function ρ, which satisfy

  • R1 – ρ is symmetric, continuously differentiable and ρ(0) = 0.
  • R2 – there exists c > 0 such that ρ is strictly increasing on [0, c] and constant on [c, ∞).

For any sample {r_1, \dots, r_n} of real numbers, we define the scale estimate s(r_1, \dots, r_n) as the solution of

\frac{1}{n} \sum_{i=1}^{n} \rho\left(\frac{r_i}{s}\right) = K,

where K is the expected value of ρ under the standard normal distribution. (If the above equation has several solutions, we take the smallest solution s; if it has no solution, we set s(r_1, \dots, r_n) = 0.)
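As a concrete illustration (not part of the original paper), a common ρ satisfying R1 and R2 is Tukey's biweight function with tuning constant c ≈ 1.547, a standard choice that yields a 50% breakdown point. The sketch below, with hypothetical function names and only NumPy assumed, computes K by numerically integrating ρ against the standard normal density and then solves the defining equation for s by bisection:

```python
import numpy as np

def rho_biweight(u, c=1.547):
    # Tukey's biweight: strictly increasing on [0, c], constant on [c, inf)
    u = np.minimum(np.abs(u), c)
    return u**2 / 2 - u**4 / (2 * c**2) + u**6 / (6 * c**4)

def normal_expectation(c=1.547):
    # K = E[rho(Z)] for Z ~ N(0, 1), by a Riemann sum over [-8, 8]
    z = np.linspace(-8, 8, 20001)
    pdf = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    return np.sum(rho_biweight(z, c) * pdf) * (z[1] - z[0])

def scale_estimate(r, c=1.547):
    # Solve (1/n) * sum rho(r_i / s) = K for s.  The left-hand side is
    # decreasing in s, so bisection on a log scale converges.
    K = normal_expectation(c)
    g = lambda s: np.mean(rho_biweight(r / s, c)) - K
    lo, hi = 1e-12, 1e12
    if g(lo) < 0:            # no solution: the convention above gives s = 0
        return 0.0
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return np.sqrt(lo * hi)
```

For a large N(0, σ²) sample this recovers s ≈ σ (the constant K makes the estimator consistent at the normal model), while heavy contamination inflates s only moderately because ρ is bounded.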

Definition:

Let (x_1, y_1), \dots, (x_n, y_n) be a sample of regression data with p-dimensional x_i. For each candidate vector θ we form the residuals r_i(\theta) = y_i - \theta^\top x_i and compute the scale estimate s(r_1(\theta), \dots, r_n(\theta)) from the equation of scale above, where ρ satisfies R1 and R2. The S-estimator \hat\theta is defined by

\hat\theta = \arg\min_{\theta} \, s(r_1(\theta), \dots, r_n(\theta))

and the final scale estimator \hat\sigma is then

\hat\sigma = s(r_1(\hat\theta), \dots, r_n(\hat\theta)).[1]
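One simple way to approximate the minimization over θ is the resampling idea used by Rousseeuw and Yohai: fit exact candidate solutions through random p-subsets of the data and keep the candidate whose residuals have the smallest scale s. The sketch below does this for simple linear regression (intercept plus slope, so p = 2); the function names and tuning constants are illustrative, not from the original paper:

```python
import numpy as np

C = 1.547  # biweight tuning constant, a common choice for 50% breakdown

def rho(u):
    # Tukey's biweight: strictly increasing on [0, C], constant on [C, inf)
    u = np.minimum(np.abs(u), C)
    return u**2 / 2 - u**4 / (2 * C**2) + u**6 / (6 * C**4)

# K = E[rho(Z)] for Z ~ N(0, 1), via a Riemann sum
_z = np.linspace(-8, 8, 20001)
K = np.sum(rho(_z) * np.exp(-_z**2 / 2) / np.sqrt(2 * np.pi)) * (_z[1] - _z[0])

def scale(r):
    # Solve (1/n) * sum rho(r_i / s) = K; the left side decreases in s,
    # so bisect on a log scale.
    g = lambda s: np.mean(rho(r / s)) - K
    lo, hi = 1e-12, 1e12
    if g(lo) < 0:              # no solution: convention s = 0
        return 0.0
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return np.sqrt(lo * hi)

def s_estimator(x, y, n_subsets=500, seed=0):
    # Exact-fit lines through random pairs of points serve as candidates;
    # the S-estimate is the candidate with the smallest residual scale.
    rng = np.random.default_rng(seed)
    n = len(x)
    best_theta, best_s = None, np.inf
    for _ in range(n_subsets):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        s = scale(y - (intercept + slope * x))
        if s < best_s:
            best_theta, best_s = (intercept, slope), s
    return best_theta, best_s
```

Because each candidate interpolates a random pair of observations, a line fitted entirely to uncontaminated points will typically appear among the subsets, and the bounded ρ in the scale objective then rejects candidates attracted by outliers.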

References

  1. P. Rousseeuw and V. Yohai, "Robust Regression by Means of S-Estimators", in: Robust and Nonlinear Time Series Analysis, Springer, 1984, pp. 256–272.