Concentration inequality

In probability theory, concentration inequalities provide mathematical bounds on the probability of a random variable deviating from some value (typically, its expected value). The deviation, or some other function of the random variable, can be thought of as a secondary random variable. The simplest example of the concentration of such a secondary random variable is the CDF of the first random variable, which concentrates the probability to unity. If an analytic form of the CDF is available, this provides a concentration equality giving the exact probability of concentration. It is precisely when the CDF is difficult to calculate, or when even the exact form of the first random variable is unknown, that concentration inequalities provide useful insight.

Another almost universal example of a concentrated secondary random variable comes from the law of large numbers of classical probability theory, which states that sums of independent random variables, under mild conditions, concentrate around their expectation with high probability. Such sums are the most basic examples of random variables concentrated around their mean.

Concentration inequalities can be sorted according to how much information about the random variable is needed in order to use them.

Markov's inequality

Main article: Markov's inequality

Let X be a random variable that is non-negative (almost surely). Then, for every constant a > 0,

$$\Pr(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$

Note the following extension to Markov's inequality: if Φ is a strictly increasing and non-negative function, then

$$\Pr(X \geq a) = \Pr(\Phi(X) \geq \Phi(a)) \leq \frac{\operatorname{E}(\Phi(X))}{\Phi(a)}.$$
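The bound is easy to check numerically. The following Python sketch (illustrative only; the exponential distribution, its scale and the sample size are arbitrary choices, not part of the statement above) compares an empirical estimate of Pr(X ≥ a) with the Markov bound E(X)/a:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=1_000_000)  # a non-negative random variable, E[X] = 2

for a in [1.0, 4.0, 10.0]:
    empirical = np.mean(X >= a)   # Monte Carlo estimate of Pr(X >= a)
    markov = X.mean() / a         # Markov bound E(X)/a, with E(X) estimated from the sample
    print(f"a={a:5.1f}  Pr(X>=a)~{empirical:.4f}  Markov bound={markov:.4f}")
```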

Chebyshev's inequality

Main article: Chebyshev's inequality

Chebyshev's inequality requires the following information on a random variable X:

  • The expected value E[X] is finite.
  • The variance $\operatorname{Var}[X] = \operatorname{E}[(X - \operatorname{E}[X])^2]$ is finite.

Then, for every constant a>0,

$$\Pr(|X - \operatorname{E}[X]| \geq a) \leq \frac{\operatorname{Var}[X]}{a^2},$$

or equivalently,

$$\Pr(|X - \operatorname{E}[X]| \geq a \cdot \operatorname{Std}[X]) \leq \frac{1}{a^2},$$

where Std[X] is the standard deviation of X.

Chebyshev's inequality can be seen as a special case of the generalized Markov's inequality applied to the random variable $|X - \operatorname{E}[X]|$ with $\Phi(x) = x^2$.
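As an illustration (not part of the statement above; the uniform distribution and the thresholds are arbitrary choices), the following sketch compares the empirical two-sided tail probability with the Chebyshev bound Var[X]/a²:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=1_000_000)      # Var[X] = 1/12 for Uniform(0, 1)
mu, var = X.mean(), X.var()

for a in [0.2, 0.3, 0.4]:
    empirical = np.mean(np.abs(X - mu) >= a)   # Pr(|X - E[X]| >= a), estimated by simulation
    chebyshev = var / a**2                     # Chebyshev bound Var[X]/a^2
    print(f"a={a:.1f}  Pr~{empirical:.4f}  bound={chebyshev:.4f}")
```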

Vysochanskij–Petunin inequality

Main article: Vysochanskij–Petunin inequality

Let X be a random variable with a unimodal distribution, mean $\mu$ and finite, non-zero variance $\sigma^2$. Then, for any $\lambda > \sqrt{8/3} = 1.63299\ldots$,

$$\Pr(|X - \mu| \geq \lambda\sigma) \leq \frac{4}{9\lambda^2}.$$

(For a relatively elementary proof see e.g.[1]).

One-sided Vysochanskij–Petunin inequality

For a unimodal random variable X and $r \geq 0$, the one-sided Vysochanskij–Petunin inequality[2] holds as follows:

$$\Pr(X - \operatorname{E}[X] \geq r) \leq \begin{cases} \dfrac{4}{9}\,\dfrac{\operatorname{Var}(X)}{r^2 + \operatorname{Var}(X)} & \text{for } r^2 \geq \dfrac{5}{3}\operatorname{Var}(X), \\[1ex] \dfrac{4}{3}\,\dfrac{\operatorname{Var}(X)}{r^2 + \operatorname{Var}(X)} - \dfrac{1}{3} & \text{otherwise.} \end{cases}$$
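Both bounds are straightforward to evaluate. The helper functions below are a minimal sketch (the function names and the sample parameters are ours, not part of the inequalities):

```python
import math

def vp_two_sided(lam: float) -> float:
    """Two-sided Vysochanskij-Petunin bound 4/(9*lambda^2), valid for lambda > sqrt(8/3)."""
    if lam <= math.sqrt(8.0 / 3.0):
        raise ValueError("the two-sided bound requires lambda > sqrt(8/3)")
    return 4.0 / (9.0 * lam**2)

def vp_one_sided(r: float, var: float) -> float:
    """One-sided Vysochanskij-Petunin bound on Pr(X - E[X] >= r) for a unimodal X."""
    if r**2 >= (5.0 / 3.0) * var:
        return (4.0 / 9.0) * var / (r**2 + var)
    return (4.0 / 3.0) * var / (r**2 + var) - 1.0 / 3.0

print(vp_two_sided(2.0))           # deviation of 2 standard deviations
print(vp_one_sided(2.0, var=1.0))  # one-sided bound, unit variance, r = 2
```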

Paley–Zygmund inequality

Main article: Paley–Zygmund inequality

In contrast to most commonly used concentration inequalities, the Paley–Zygmund inequality provides a lower bound on the deviation probability.

Cantelli's inequality

Main article: Cantelli's inequality

Gauss's inequality

Main article: Gauss's inequality

Chernoff bounds

Main article: Chernoff bound

The generic Chernoff bound[3] requires the moment generating function of X, defined as $M_X(t) := \operatorname{E}[e^{tX}]$. It always exists, but may be infinite. From Markov's inequality, for every $t > 0$:

$$\Pr(X \geq a) \leq \frac{\operatorname{E}[e^{tX}]}{e^{ta}},$$

and for every t<0:

$$\Pr(X \leq a) \leq \frac{\operatorname{E}[e^{tX}]}{e^{ta}}.$$

There are various Chernoff bounds for different distributions and different values of the parameter t. See [4] for a compilation of more concentration inequalities.
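As a numerical sketch (illustrative only; the binomial setting and the grid of t values are arbitrary choices), the generic bound can be tightened by minimizing over t. Here this is done by a simple grid search for a sum of independent Bernoulli variables, whose moment generating function is known in closed form:

```python
import numpy as np

# X is a sum of n independent Bernoulli(p) variables; its MGF is (1 - p + p*e^t)^n.
n, p, a = 100, 0.5, 60                        # bound Pr(X >= a) for a above the mean n*p = 50

ts = np.linspace(1e-3, 5.0, 5000)             # grid of t > 0
mgf = (1.0 - p + p * np.exp(ts)) ** n         # E[e^{tX}] at each t
bounds = mgf * np.exp(-ts * a)                # Chernoff bound E[e^{tX}] / e^{ta} at each t
print("best Chernoff bound over the grid:", bounds.min())
```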

Mill's inequality

Let $Z \sim N(0, 1)$. Then

$$\Pr(|Z| > t) \leq \sqrt{\frac{2}{\pi}}\, \frac{\exp(-t^2/2)}{t}.$$
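Since Pr(|Z| > t) = erfc(t/√2) for a standard normal Z, the bound can be compared with the exact tail using only the standard library (an illustrative check):

```python
import math

for t in [1.0, 2.0, 3.0]:
    exact = math.erfc(t / math.sqrt(2.0))                          # Pr(|Z| > t) for Z ~ N(0, 1)
    mills = math.sqrt(2.0 / math.pi) * math.exp(-t**2 / 2.0) / t   # Mill's bound
    print(f"t={t:.0f}  exact={exact:.5f}  bound={mills:.5f}")
```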

Bounds on sums of independent bounded variables

Let $X_1, X_2, \ldots, X_n$ be independent random variables such that, for all i:

$a_i \leq X_i \leq b_i$ almost surely.
$c_i := b_i - a_i$
$\forall i: c_i \leq C$

Let $S_n$ be their sum, $E_n$ its expected value and $V_n$ its variance:

$$S_n := \sum_{i=1}^n X_i$$
$$E_n := \operatorname{E}[S_n] = \sum_{i=1}^n \operatorname{E}[X_i]$$
$$V_n := \operatorname{Var}[S_n] = \sum_{i=1}^n \operatorname{Var}[X_i]$$

It is often interesting to bound the difference between the sum and its expected value. Several inequalities can be used; a numerical comparison of a few of them is sketched after this list.

1. Hoeffding's inequality says that:

$$\Pr[|S_n - E_n| > t] \leq 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right) \leq 2\exp\left(-\frac{2t^2}{nC^2}\right)$$

2. The random variable $S_n - E_n$ is a special case of a martingale, and $S_0 - E_0 = 0$. Hence, the general form of Azuma's inequality can also be used and it yields a similar bound:

$$\Pr[|S_n - E_n| > t] < 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right) < 2\exp\left(-\frac{2t^2}{nC^2}\right)$$

This is a generalization of Hoeffding's since it can handle other types of martingales, as well as supermartingales and submartingales. See Fan et al. (2015).[5] Note that if the simpler form of Azuma's inequality is used, the exponent in the bound is worse by a factor of 4.

3. The sum function, $S_n = f(X_1, \ldots, X_n)$, is a special case of a function of n variables. This function changes in a bounded way: if variable i is changed, the value of f changes by at most $b_i - a_i \leq C$. Hence, McDiarmid's inequality can also be used and it yields a similar bound:

$$\Pr[|S_n - E_n| > t] < 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right) < 2\exp\left(-\frac{2t^2}{nC^2}\right)$$

This is a different generalization of Hoeffding's since it can handle other functions besides the sum function, as long as they change in a bounded way.

4. Bennett's inequality offers some improvement over Hoeffding's when the variances of the summands are small compared to their almost-sure bounds C. It says that:

$$\Pr[|S_n - E_n| > t] \leq 2\exp\left[-\frac{V_n}{C^2}\, h\!\left(\frac{Ct}{V_n}\right)\right], \quad \text{where } h(u) = (1+u)\log(1+u) - u$$

5. The first of Bernstein's inequalities says that:

$$\Pr[|S_n - E_n| > t] < 2\exp\left(-\frac{t^2/2}{V_n + Ct/3}\right)$$

This is a generalization of Hoeffding's since it can exploit not only an almost-sure bound on the summands but also a bound on their variance.

6. Chernoff bounds have a particularly simple form in the case of a sum of independent variables, since $\operatorname{E}[e^{tS_n}] = \prod_{i=1}^n \operatorname{E}[e^{tX_i}]$.

For example,[6] suppose the variables $X_i$ satisfy $X_i \geq \operatorname{E}(X_i) - a_i - M$, for $1 \leq i \leq n$. Then we have lower tail inequality:

$$\Pr[S_n - E_n < -\lambda] \leq \exp\left(-\frac{\lambda^2}{2\left(V_n + \sum_{i=1}^n a_i^2 + M\lambda/3\right)}\right)$$

If $X_i$ satisfies $X_i \leq \operatorname{E}(X_i) + a_i + M$, we have upper tail inequality:

$$\Pr[S_n - E_n > \lambda] \leq \exp\left(-\frac{\lambda^2}{2\left(V_n + \sum_{i=1}^n a_i^2 + M\lambda/3\right)}\right)$$

If the $X_i$ are i.i.d., $|X_i| \leq 1$ and $\sigma^2$ is the variance of $X_i$, a typical version of the Chernoff inequality is:

$$\Pr[|S_n| \geq k\sigma] \leq 2e^{-k^2/4n} \quad \text{for } 0 \leq k \leq 2\sigma.$$

7. Similar bounds can be found in: Rademacher distribution#Bounds on sums
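The sketch below (illustrative only; the summand variance and the values of t are arbitrary choices) evaluates the Hoeffding, Bennett and Bernstein bounds above for a sum of n independent variables supported in intervals of length C = 1, for example Uniform(0, 1) summands:

```python
import numpy as np

n, C = 100, 1.0          # n summands, each with c_i = C = 1
var_i = 1.0 / 12.0       # e.g. Uniform(0, 1) summands: Var[X_i] = 1/12
Vn = n * var_i           # variance of the sum

def hoeffding(t):
    return 2.0 * np.exp(-2.0 * t**2 / (n * C**2))

def bennett(t):
    h = lambda u: (1.0 + u) * np.log(1.0 + u) - u
    return 2.0 * np.exp(-(Vn / C**2) * h(C * t / Vn))

def bernstein(t):
    return 2.0 * np.exp(-(t**2 / 2.0) / (Vn + C * t / 3.0))

for t in [5.0, 10.0, 20.0]:
    print(f"t={t:4.1f}  Hoeffding={hoeffding(t):.3e}  "
          f"Bennett={bennett(t):.3e}  Bernstein={bernstein(t):.3e}")
```

With a small per-summand variance, the Bennett and Bernstein bounds come out substantially smaller than Hoeffding's, as the discussion above suggests.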

Efron–Stein inequality

The Efron–Stein inequality (or influence inequality, or MG bound on variance) bounds the variance of a general function.

Suppose that $X_1, \ldots, X_n$, $X_1', \ldots, X_n'$ are independent with $X_i'$ and $X_i$ having the same distribution for all i.

Let $X = (X_1, \ldots, X_n)$ and $X^{(i)} = (X_1, \ldots, X_{i-1}, X_i', X_{i+1}, \ldots, X_n)$. Then

$$\operatorname{Var}(f(X)) \leq \frac{1}{2} \sum_{i=1}^n \operatorname{E}\left[\left(f(X) - f(X^{(i)})\right)^2\right].$$

A proof may be found in, e.g.,[7].
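The right-hand side can be estimated by Monte Carlo. The sketch below is illustrative only: the choice f = max and the uniform distribution are ours, not part of the inequality.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 10, 200_000
f = lambda x: x.max(axis=-1)          # an example function of n variables

X = rng.uniform(size=(trials, n))     # samples of X = (X_1, ..., X_n)
Xp = rng.uniform(size=(trials, n))    # independent copies X_1', ..., X_n'

es_sum = 0.0
for i in range(n):
    Xi = X.copy()
    Xi[:, i] = Xp[:, i]               # X^(i): coordinate i replaced by X_i'
    es_sum += np.mean((f(X) - f(Xi)) ** 2)

print("Var(f(X))         ~", np.var(f(X)))
print("Efron-Stein bound ~", 0.5 * es_sum)
```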

Bretagnolle–Huber–Carol inequality

The Bretagnolle–Huber–Carol inequality bounds the difference between a vector of multinomially distributed random variables and a vector of expected values.[8][9] A simple proof appears in [10] (Appendix Section).

If a random vector $(Z_1, Z_2, Z_3, \ldots, Z_n)$ is multinomially distributed with parameters $(p_1, p_2, \ldots, p_n)$ and satisfies $Z_1 + Z_2 + \cdots + Z_n = M$, then

$$\Pr\left(\sum_{i=1}^n |Z_i - M p_i| \geq 2M\varepsilon\right) \leq 2^n e^{-2M\varepsilon^2}.$$

This inequality is used to bound the total variation distance.
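For concreteness (an illustrative sketch only; the category probabilities, M and ε are arbitrary choices), the following code compares the simulated deviation with the bound:

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])   # n = 3 categories
M = 1000                        # Z_1 + ... + Z_n = M draws
eps = 0.05

Z = rng.multinomial(M, p, size=100_000)            # 100,000 multinomial vectors
l1 = np.abs(Z - M * p).sum(axis=1)                 # sum_i |Z_i - M p_i|
empirical = np.mean(l1 >= 2 * M * eps)             # Pr(sum_i |Z_i - M p_i| >= 2 M eps)
bound = 2 ** len(p) * np.exp(-2 * M * eps ** 2)    # 2^n e^{-2 M eps^2}

print("empirical:", empirical, " Bretagnolle-Huber-Carol bound:", bound)
```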

Mason and van Zwet inequality

The Mason and van Zwet inequality[11] for multinomial random vectors concerns a slight modification of the classical chi-square statistic.

Let the random vector $(N_1, \ldots, N_k)$ be multinomially distributed with parameters n and $(p_1, \ldots, p_k)$ such that $p_i > 0$ for $i < k$. Then for every $C > 0$ and $\delta > 0$ there exist constants $a, b, c > 0$, such that for all $n \geq 1$ and $\lambda, p_1, \ldots, p_{k-1}$ satisfying $\lambda > C n \min\{p_i \mid 1 \leq i \leq k-1\}$ and $\sum_{i=1}^{k-1} p_i \leq 1 - \delta$, we have

$$\Pr\left(\sum_{i=1}^{k-1} \frac{(N_i - n p_i)^2}{n p_i} > \lambda\right) \leq a e^{bk - c\lambda}.$$

Dvoretzky–Kiefer–Wolfowitz inequality

Main article: Dvoretzky–Kiefer–Wolfowitz inequality

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the difference between the real and the empirical cumulative distribution function.

Given a natural number n, let $X_1, X_2, \ldots, X_n$ be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let $F_n$ denote the associated empirical distribution function defined by

$$F_n(x) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{X_i \leq x\}}, \qquad x \in \mathbb{R}.$$

So F(x) is the probability that a single random variable X is at most x, and $F_n(x)$ is the fraction of the sample that is at most x.

Then

$$\Pr\left(\sup_{x \in \mathbb{R}} \left(F_n(x) - F(x)\right) > \varepsilon\right) \leq e^{-2n\varepsilon^2} \quad \text{for every } \varepsilon \geq \sqrt{\frac{1}{2n}\ln 2}.$$
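The inequality yields a distribution-free confidence band for the empirical CDF. The sketch below (illustrative; the standard normal sample and ε = 0.05 are arbitrary choices) checks one simulated sample against the bound:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = np.sort(rng.standard_normal(n))             # i.i.d. sample; F is the standard normal CDF

eps = 0.05
assert eps >= math.sqrt(math.log(2) / (2 * n))  # condition on epsilon in the statement above

F = np.array([0.5 * (1.0 + math.erf(x / math.sqrt(2.0))) for x in X])  # true CDF at sample points
Fn = np.arange(1, n + 1) / n                    # empirical CDF at the sorted sample points

print("sup_x (Fn(x) - F(x)) =", np.max(Fn - F))
print("epsilon =", eps, " bound e^{-2 n eps^2} =", math.exp(-2 * n * eps ** 2))
```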

Anti-concentration inequalities

Anti-concentration inequalities, on the other hand, provide an upper bound on how much a random variable can concentrate, either on a specific value or on a range of values. A concrete example is that if you flip a fair coin n times, the probability that any given number of heads appears will be less than $\frac{1}{\sqrt{n}}$. This idea can be greatly generalized. For example, a result of Rao and Yehudayoff[12] implies that for any $\beta, \delta > 0$ there exists some $C > 0$ such that, for any $k$, the following is true for at least $2^n(1 - \delta)$ values of $x \in \{\pm 1\}^n$:

$$\Pr(\langle x, Y \rangle = k) \leq \frac{C}{\sqrt{n}},$$

where Y is drawn uniformly from $\{\pm 1\}^n$.
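The coin-flip example can be checked exactly: the most likely number of heads in n fair flips has probability $\binom{n}{\lfloor n/2 \rfloor} 2^{-n}$, which is below $1/\sqrt{n}$. A small illustrative sketch:

```python
import math

for n in [10, 100, 1000]:
    # largest point probability of Binomial(n, 1/2), attained at k = n // 2
    max_pmf = math.comb(n, n // 2) / 2 ** n
    print(f"n={n:5d}  max Pr(#heads = k) = {max_pmf:.5f}  1/sqrt(n) = {1 / math.sqrt(n):.5f}")
```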

Such inequalities are of importance in several fields, including communication complexity (e.g., in proofs of the gap Hamming problem[13]) and graph theory.[14]

An interesting anti-concentration inequality for weighted sums of independent Rademacher random variables can be obtained using the Paley–Zygmund and the Khintchine inequalities.[15]

References

Template:Reflist