Prékopa–Leindler inequality

In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.[1][2]

Statement of the inequality

Let 0 < λ < 1 and let f, g, h : Rn → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space Rn. Suppose that these functions satisfy

$$ h\bigl((1-\lambda)\,x + \lambda\,y\bigr) \;\ge\; f(x)^{1-\lambda}\, g(y)^{\lambda} \qquad (1) $$

for all x and y in Rn. Then

$$ \|h\|_{1} := \int_{\mathbb{R}^n} h(x)\,dx \;\ge\; \left(\int_{\mathbb{R}^n} f(x)\,dx\right)^{1-\lambda} \left(\int_{\mathbb{R}^n} g(x)\,dx\right)^{\lambda} =: \|f\|_{1}^{1-\lambda}\, \|g\|_{1}^{\lambda}. $$
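
As a concrete sanity check (not part of the original statement), the inequality can be tested numerically in one dimension by taking h to be the smallest function allowed by (1), the sup-convolution h(z) = sup { f(x)^(1−λ) g(y)^λ : (1 − λ)x + λy = z }. The grid, the value of λ and the Gaussian test functions in the sketch below are arbitrary illustrative choices:

```python
import numpy as np

# Grid check of the 1D Prekopa-Leindler inequality. h is the smallest
# function satisfying (1), the "sup-convolution"
#   h(z) = sup { f(x)^(1-lam) * g(y)^lam : (1-lam)*x + lam*y = z }.
lam = 0.3
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

f = np.exp(-x**2)              # illustrative log-concave test function
g = np.exp(-(x - 1.0)**2 / 2)  # illustrative log-concave test function

h = np.zeros_like(x)
for i, xi in enumerate(x):
    z = (1 - lam) * xi + lam * x                   # combinations (xi, y) over all grid y
    vals = f[i] ** (1 - lam) * g ** lam
    idx = np.clip(np.round((z - x[0]) / dx).astype(int), 0, len(x) - 1)
    np.maximum.at(h, idx, vals)                    # h[idx] = max(h[idx], vals)

lhs = h.sum() * dx
rhs = (f.sum() * dx) ** (1 - lam) * (g.sum() * dx) ** lam
print(lhs, rhs, lhs >= rhs)                        # the integral of h dominates
```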

Essential form of the inequality

Recall that the essential supremum of a measurable function f : Rn → R is defined by

$$ \operatorname{ess\,sup}_{x \in \mathbb{R}^n} f(x) = \inf \left\{ t \in [-\infty, +\infty] \,\mid\, f(x) \le t \text{ for almost all } x \in \mathbb{R}^n \right\}. $$

This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let

$$ s(x) = \operatorname{ess\,sup}_{y \in \mathbb{R}^n} \, f\!\left(\frac{x - y}{1 - \lambda}\right)^{1-\lambda} g\!\left(\frac{y}{\lambda}\right)^{\lambda}. $$

Then s is measurable and

$$ \|s\|_{1} \;\ge\; \|f\|_{1}^{1-\lambda}\, \|g\|_{1}^{\lambda}. $$
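
A rough grid approximation of s (replacing the essential supremum by a maximum over grid points, which is harmless for the continuous illustrative functions chosen here; none of these choices come from the article) gives a quick numerical check of this form as well:

```python
import numpy as np

# Grid sanity check of the essential-sup (Brascamp-Lieb) form: s is
# approximated by a pointwise maximum over grid points y.
lam = 0.4
x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]

f = np.exp(-x**2)        # illustrative integrable function
g = np.exp(-np.abs(x))   # illustrative integrable function

def on_grid(vals, pts):
    """Evaluate a gridded function at arbitrary points, zero outside the grid."""
    return np.interp(pts, x, vals, left=0.0, right=0.0)

g_term = on_grid(g, x / lam) ** lam    # g(y / lam)^lam over the grid of y
s = np.array([np.max(on_grid(f, (xi - x) / (1 - lam)) ** (1 - lam) * g_term)
              for xi in x])

lhs = s.sum() * dx
rhs = (f.sum() * dx) ** (1 - lam) * (g.sum() * dx) ** lam
print(lhs, rhs, lhs >= rhs)            # ||s||_1 should dominate ||f||_1^(1-lam) ||g||_1^lam
```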

The essential supremum form was given by Herm Brascamp and Elliott Lieb.[3] Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form.

Relationship to the Brunn–Minkowski inequality

It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 − λ)A + λB is also measurable, then

$$ \mu\bigl((1-\lambda)A + \lambda B\bigr) \;\ge\; \mu(A)^{1-\lambda}\, \mu(B)^{\lambda}, $$

where μ denotes n-dimensional Lebesgue measure; this form follows by applying (1) to the indicator functions of A, B and (1 − λ)A + λB. Hence, the Prékopa–Leindler inequality can also be used[4] to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 − λ)A + λB is also measurable, then

$$ \mu\bigl((1-\lambda)A + \lambda B\bigr)^{1/n} \;\ge\; (1-\lambda)\,\mu(A)^{1/n} + \lambda\,\mu(B)^{1/n}. $$
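
For axis-aligned boxes the Minkowski combination (1 − λ)A + λB is again a box and all volumes are explicit, so both forms reduce to elementary inequalities between the side lengths (the weighted AM–GM inequality in the multiplicative case). A short numerical check of this special case, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension

for _ in range(1000):
    lam = rng.uniform(0.01, 0.99)
    a = rng.uniform(0.1, 5.0, size=n)   # side lengths of box A
    b = rng.uniform(0.1, 5.0, size=n)   # side lengths of box B

    vol_A = np.prod(a)
    vol_B = np.prod(b)
    # (1-lam)A + lam B is the box with side lengths (1-lam)*a + lam*b
    vol_sum = np.prod((1 - lam) * a + lam * b)

    # multiplicative (Prekopa-Leindler) form
    assert vol_sum >= vol_A**(1 - lam) * vol_B**lam - 1e-12
    # additive (1/n-th power) form
    assert vol_sum**(1 / n) >= (1 - lam) * vol_A**(1 / n) + lam * vol_B**(1 / n) - 1e-12

print("both Brunn-Minkowski forms hold for all sampled boxes")
```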

Applications in probability and statistics

Log-concave distributions

The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved both by marginalization and by independent summation of log-concave random variables. Since, if X and Y are independent random variables with pdfs f and g, the convolution f ∗ g is the pdf of X + Y, it also follows that the convolution of two log-concave functions is log-concave.

Suppose that H(x,y) is a log-concave distribution for (x,y) ∈ Rm × Rn, so that by definition we have

$$ H\bigl((1-\lambda)x_1 + \lambda x_2,\; (1-\lambda)y_1 + \lambda y_2\bigr) \;\ge\; H(x_1, y_1)^{1-\lambda}\, H(x_2, y_2)^{\lambda} \qquad (2) $$

for all (x1, y1), (x2, y2) ∈ Rm × Rn and 0 < λ < 1,

and let M(y) denote the marginal distribution obtained by integrating over x:

$$ M(y) = \int_{\mathbb{R}^m} H(x, y)\, dx. $$

Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then condition (2) guarantees that hypothesis (1) holds with h(x) = H(x, (1 − λ)y1 + λy2), f(x) = H(x, y1) and g(x) = H(x, y2), so the Prékopa–Leindler inequality applies. It can be written in terms of M as

$$ M\bigl((1-\lambda)y_1 + \lambda y_2\bigr) \;\ge\; M(y_1)^{1-\lambda}\, M(y_2)^{\lambda}, $$

which is the definition of log-concavity for M.

To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distributions. Since the product of two log-concave functions is log-concave, the joint distribution of (X, Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X + Y is a marginal of the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution.
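
The closing claim of the previous paragraph, that the convolution of two log-concave densities is log-concave, can be illustrated numerically. The sketch below, with arbitrary illustrative densities, checks the discrete log-concavity condition h_k² ≥ h_{k−1} h_{k+1} for a numerically computed convolution:

```python
import numpy as np

# Convolve two log-concave densities numerically and verify that the result
# is a log-concave sequence: h[k]^2 >= h[k-1] * h[k+1] for all interior k.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

f = np.exp(-x**2)             # log-concave
g = np.exp(-np.abs(x)**1.5)   # log-concave

h = np.convolve(f, g, mode="same") * dx   # unnormalized density of the independent sum

ok = np.all(h[1:-1]**2 >= h[:-2] * h[2:] * (1 - 1e-9))
print("convolution is log-concave on the grid:", ok)
```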

Applications to concentration of measure

The Prékopa–Leindler inequality can be used to prove results about concentration of measure.

Theorem. Let $A \subseteq \mathbb{R}^n$, and set $A_\epsilon = \{x : d(x, A) < \epsilon\}$. Let $\gamma(x)$ denote the standard Gaussian pdf on $\mathbb{R}^n$, and $\mu$ its associated measure. Then $\mu(A_\epsilon) \ge 1 - \dfrac{e^{-\epsilon^2/4}}{\mu(A)}$.

The proof of this theorem goes by way of the following lemma:

Lemma. In the notation of the theorem, $\int_{\mathbb{R}^n} \exp\bigl(d(x, A)^2 / 4\bigr)\, d\mu \le \dfrac{1}{\mu(A)}$.

This lemma can be proven from Prékopa–Leindler by taking $h(x) = \gamma(x)$, $f(x) = e^{d(x,A)^2/4}\,\gamma(x)$, $g(x) = 1_A(x)\,\gamma(x)$ and $\lambda = 1/2$. To verify the hypothesis of the inequality, $h\!\left(\tfrac{x+y}{2}\right) \ge \sqrt{f(x)\,g(y)}$, note that we only need to consider $y \in A$, in which case $d(x, A) \le \|x - y\|$. This allows us to calculate:

$$ (2\pi)^n f(x)\, g(y) = \exp\!\left(\frac{d(x,A)^2}{4} - \frac{\|x\|^2}{2} - \frac{\|y\|^2}{2}\right) \le \exp\!\left(\frac{\|x-y\|^2}{4} - \frac{\|x\|^2}{2} - \frac{\|y\|^2}{2}\right) = \exp\!\left(-\left\|\frac{x+y}{2}\right\|^2\right) = (2\pi)^n\, h\!\left(\frac{x+y}{2}\right)^{2}. $$

Since $\int_{\mathbb{R}^n} h(x)\, dx = 1$, the Prékopa–Leindler inequality immediately gives the lemma.

To conclude the concentration inequality from the lemma, note that on $\mathbb{R}^n \setminus A_\epsilon$ we have $d(x, A) \ge \epsilon$, so that $\int_{\mathbb{R}^n} \exp\bigl(d(x, A)^2 / 4\bigr)\, d\mu \ge \bigl(1 - \mu(A_\epsilon)\bigr)\, e^{\epsilon^2/4}$. Applying the lemma and rearranging proves the result.
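
As a sanity check of the theorem (not part of the article's proof), one can take n = 1 and A a half-line, where μ(A) and μ(A_ε) are explicit in terms of the standard normal cdf Φ:

```python
import math
from scipy.stats import norm

# 1D sanity check of the concentration bound mu(A_eps) >= 1 - exp(-eps^2/4) / mu(A),
# taking A = (-inf, a] so that A_eps = (-inf, a + eps) and mu(A_eps) = Phi(a + eps).
for a in (-1.0, 0.0, 1.0):
    mu_A = norm.cdf(a)
    for eps in (0.5, 1.0, 2.0, 4.0):
        lhs = norm.cdf(a + eps)                   # mu(A_eps)
        rhs = 1 - math.exp(-eps**2 / 4) / mu_A    # lower bound from the theorem
        print(f"a={a:+.1f} eps={eps:.1f}  mu(A_eps)={lhs:.4f}  bound={rhs:+.4f}  ok={lhs >= rhs}")
```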

References

Template:Reflist
