Samuelson's inequality



In statistics, Samuelson's inequality, named after the economist Paul Samuelson,[1] also called the Laguerre–Samuelson inequality[2][3] after the mathematician Edmond Laguerre, states that every value in any collection x_1, ..., x_n lies within √(n − 1) uncorrected sample standard deviations of the sample mean.

Statement of the inequality

If we let

\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}

be the sample mean and

s = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}

be the standard deviation of the sample, then

\bar{x} - s\sqrt{n-1} \le x_j \le \bar{x} + s\sqrt{n-1} \qquad \text{for } j = 1, \dots, n.[4]

Equality holds on the left (respectively right) for x_j if and only if the n − 1 values x_i other than x_j are all equal to one another and greater (respectively smaller) than x_j.[2]

If one instead defines s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2} (the corrected sample standard deviation), then the inequality \bar{x} - s\sqrt{n-1} \le x_j \le \bar{x} + s\sqrt{n-1} still applies, and it can be slightly tightened to \bar{x} - s\,\frac{n-1}{\sqrt{n}} \le x_j \le \bar{x} + s\,\frac{n-1}{\sqrt{n}}.
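
A minimal numerical check may help make the bounds concrete. The Python sketch below uses a hypothetical sample, evaluates both forms of the bound, and verifies that every observation lies inside them:

    import numpy as np

    # Hypothetical sample, used only for illustration.
    x = np.array([2.0, 3.5, 5.0, 7.0, 11.0])
    n = len(x)
    mean = x.mean()

    # Uncorrected standard deviation (divide by n), as in Samuelson's statement.
    s = np.sqrt(np.sum((x - mean) ** 2) / n)
    lo, hi = mean - s * np.sqrt(n - 1), mean + s * np.sqrt(n - 1)
    print(f"Samuelson bounds (uncorrected s): [{lo:.3f}, {hi:.3f}]")

    # Corrected standard deviation (divide by n - 1): the plain sqrt(n - 1) bound
    # still holds, and the tightened factor (n - 1)/sqrt(n) recovers the bound above.
    s1 = np.sqrt(np.sum((x - mean) ** 2) / (n - 1))
    print(f"Looser bound with corrected s:    "
          f"[{mean - s1 * np.sqrt(n - 1):.3f}, {mean + s1 * np.sqrt(n - 1):.3f}]")
    print(f"Tightened bound with corrected s: "
          f"[{mean - s1 * (n - 1) / np.sqrt(n):.3f}, {mean + s1 * (n - 1) / np.sqrt(n):.3f}]")

    print("All points inside:", bool(np.all((x >= lo) & (x <= hi))))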

Comparison to Chebyshev's inequality

Main article: Chebyshev's inequality

Chebyshev's inequality locates a certain fraction of the data within certain bounds, while Samuelson's inequality locates all the data points within certain bounds.

The bounds given by Chebyshev's inequality are unaffected by the number of data points, while for Samuelson's inequality the bounds loosen as the sample size increases. Thus for large enough data sets, Chebyshev's inequality is more useful.
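
As a rough illustration of this trade-off (the choice of k = 3 standard deviations and the sample sizes below are arbitrary), the following sketch prints how Samuelson's guarantee widens with n while Chebyshev's stays fixed:

    import numpy as np

    # Chebyshev: at least 1 - 1/k**2 of any data set lies within k standard deviations.
    k = 3
    chebyshev_fraction = 1 - 1 / k**2

    for n in (10, 100, 1_000, 10_000):
        # Samuelson: every one of the n points lies within sqrt(n - 1) standard deviations.
        samuelson_halfwidth = np.sqrt(n - 1)
        print(f"n = {n:6d}: Samuelson bounds all points within {samuelson_halfwidth:7.2f} s.d.; "
              f"Chebyshev bounds at least {chebyshev_fraction:.1%} within {k} s.d.")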

Applications


Samuelson's inequality has several applications in statistics and mathematics. It is useful in the studentization of residuals, where it provides a rationale for performing the studentization externally in order to better capture the spread of the residuals in regression analysis.

In matrix theory, Samuelson’s inequality is used to locate the eigenvalues of certain matrices and tensors.

Furthermore, generalizations of this inequality apply to complex data and random variables in a probability space.[5][6]

Relationship to polynomials

Samuelson was not the first to describe this relationship: the first was probably Laguerre in 1880 while investigating the roots (zeros) of polynomials.[2][7]

Consider a polynomial with all roots real:

a_0 x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n = 0

Without loss of generality, let a_0 = 1 and let

t_1 = \sum x_i \quad \text{and} \quad t_2 = \sum x_i^2

Then

a_1 = -\sum x_i = -t_1

and

a_2 = \sum_{i<j} x_i x_j = \frac{t_1^2 - t_2}{2}

In terms of the coefficients,

t_2 = t_1^2 - 2a_2 = a_1^2 - 2a_2

since t_1 = -a_1.

Laguerre showed that the roots of this polynomial were bounded by

-\frac{a_1}{n} \pm b\sqrt{n-1}

where

b = \frac{\sqrt{n t_2 - t_1^2}}{n} = \frac{\sqrt{n a_1^2 - a_1^2 - 2 n a_2}}{n}

Inspection shows that -a_1/n is the mean of the roots and that b is the standard deviation of the roots.

Laguerre failed to notice this relationship with the means and standard deviations of the roots, being more interested in the bounds themselves. This relationship permits a rapid estimate of the bounds of the roots and may be of use in their location.
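
As an illustration of such a rapid estimate (a sketch only; the polynomial below is hypothetical), the following Python code evaluates Laguerre's bound directly from the coefficients a_1 and a_2 and compares it with the actual roots:

    import numpy as np

    # Monic polynomial with all real roots, here (x-1)(x-2)(x-4)(x-7).
    roots_true = np.array([1.0, 2.0, 4.0, 7.0])
    coeffs = np.poly(roots_true)      # [a0, a1, a2, ...] with a0 = 1, highest degree first
    n = len(roots_true)
    a1, a2 = coeffs[1], coeffs[2]

    # Laguerre's bound: all roots lie in -a1/n ± b*sqrt(n - 1),
    # where b = sqrt(n*a1**2 - a1**2 - 2*n*a2) / n.
    mean_roots = -a1 / n
    b = np.sqrt(n * a1**2 - a1**2 - 2 * n * a2) / n
    lo, hi = mean_roots - b * np.sqrt(n - 1), mean_roots + b * np.sqrt(n - 1)

    print(f"mean of roots      : {mean_roots:.3f} (matches {roots_true.mean():.3f})")
    print(f"std. dev. of roots : {b:.3f} (matches {roots_true.std():.3f})")
    print(f"Laguerre bounds    : [{lo:.3f}, {hi:.3f}]")
    print("all roots inside   :", bool(np.all((roots_true >= lo) & (roots_true <= hi))))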

When the coefficients a_1 and a_2 are both zero, no information can be obtained about the location of the roots, because not all of the roots are real (as can be seen from Descartes' rule of signs) unless the constant term is also zero.

References


  1. Samuelson, Paul (1968). "How Deviant Can You Be?". Journal of the American Statistical Association. 63 (324): 1522–1525.
  2. 2.0 2.1 2.2 Template:Cite thesis
  3. Template:Cite book
  4. Template:Cite book
  5. Template:Cite journal
  6. Template:Cite book
  7. Laguerre, E. (1880). "Mémoire pour obtenir par approximation les racines d'une équation algébrique qui a toutes les racines réelles". Nouvelles Annales de Mathématiques, 2e série, 19: 161–172, 193–202.