Brascamp–Lieb inequality

In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on $n$-dimensional Euclidean space $\mathbb{R}^n$. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb.

The geometric inequality

Fix natural numbers $m$ and $n$. For $1 \le i \le m$, let $n_i \in \mathbb{N}$ and let $c_i > 0$ so that

$$\sum_{i=1}^{m} c_i n_i = n.$$

Choose non-negative, integrable functions

$$f_i \in L^1\left(\mathbb{R}^{n_i}; [0, +\infty]\right)$$

and surjective linear maps

$$B_i : \mathbb{R}^n \to \mathbb{R}^{n_i}.$$

Then the following inequality holds:

$$\int_{\mathbb{R}^n} \prod_{i=1}^{m} f_i(B_i x)^{c_i} \, \mathrm{d}x \le D^{-1/2} \prod_{i=1}^{m} \left( \int_{\mathbb{R}^{n_i}} f_i(y) \, \mathrm{d}y \right)^{c_i},$$

where $D$ is given by

$$D = \inf \left\{ \frac{\det\left( \sum_{i=1}^{m} c_i B_i^{*} A_i B_i \right)}{\prod_{i=1}^{m} (\det A_i)^{c_i}} \;\middle|\; A_i \text{ is a positive-definite } n_i \times n_i \text{ matrix} \right\}.$$

Another way to state this is that the constant $D$ is what one would obtain by restricting attention to the case in which each $f_i$ is a centered Gaussian function, namely $f_i(y) = \exp\{ -\langle y, A_i y \rangle \}$.[1]
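As a concrete illustration, here is a small numerical sketch (a hypothetical example and code, not from the original text; it assumes NumPy is available). The datum is $n = 2$, $m = 2$, $n_1 = n_2 = 1$, $c_1 = c_2 = 1$, with $B_i x = \langle x, u_i \rangle$ for two unit vectors $u_1, u_2$ at angle $\theta$. For this datum the ratio inside the infimum defining $D$ does not depend on the scalars $A_i = a_i$ and equals $\sin^2\theta$, so $D = \sin^2\theta$ and the constant in the inequality is $D^{-1/2} = 1/\sin\theta$.

    # Hypothetical rank-one datum in R^2 (illustration only): B_i x = <x, u_i>,
    # n_i = 1, c_1 = c_2 = 1, so the matrices A_i reduce to positive scalars a_i.
    import numpy as np

    theta = np.pi / 3
    u1 = np.array([1.0, 0.0])
    u2 = np.array([np.cos(theta), np.sin(theta)])

    def gaussian_ratio(a1, a2):
        # det(c_1 a_1 u_1 u_1^T + c_2 a_2 u_2 u_2^T) / (a_1^{c_1} a_2^{c_2}),
        # the quantity whose infimum over a_1, a_2 > 0 defines D.
        M = a1 * np.outer(u1, u1) + a2 * np.outer(u2, u2)
        return np.linalg.det(M) / (a1 * a2)

    rng = np.random.default_rng(0)
    for a1, a2 in rng.uniform(0.1, 10.0, size=(3, 2)):
        # every sample gives the same value, sin^2(theta), so D = sin^2(theta)
        print(gaussian_ratio(a1, a2), np.sin(theta) ** 2)

    # For f_1 = f_2 = indicator of [0, 1], the left-hand side of the inequality is the
    # area of the parallelogram {x : <x, u_1> and <x, u_2> both lie in [0, 1]}, namely
    # 1/sin(theta), while the right-hand side is D^{-1/2} * 1 * 1 = 1/sin(theta).
    print(1.0 / np.sin(theta), (np.sin(theta) ** 2) ** -0.5)

In this example equality holds for arbitrary $f_1, f_2$, because $m = n$ with rank-one surjections makes the map $x \mapsto (B_1 x, B_2 x)$ invertible, so the inequality reduces to a linear change of variables.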

Alternative forms

Consider a probability density function $p(x) = \exp(-\phi(x))$. This probability density function $p(x)$ is said to be a log-concave measure if the function $\phi(x)$ is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode of $p(x)$. The Brascamp–Lieb inequality gives another characterization of the concentration of $p(x)$ by bounding the variance of any statistic $S(x)$.

Formally, let $S(x)$ be any differentiable function. The Brascamp–Lieb inequality reads:

$$\operatorname{var}_p (S(x)) \le E_p \left( \nabla^T S(x)\, [H \phi(x)]^{-1}\, \nabla S(x) \right),$$

where $H$ is the Hessian and $\nabla$ is the nabla symbol.[2]
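
As a sanity check (a standard special case, not spelled out in the text): if $p$ is a Gaussian density with covariance matrix $\Sigma$, then $\phi(x) = \tfrac{1}{2} \langle x, \Sigma^{-1} x \rangle$ up to an additive constant, so $H\phi(x) = \Sigma^{-1}$. For a linear statistic $S(x) = \langle a, x \rangle$ one has $\nabla S(x) = a$, and the bound becomes

$$\operatorname{var}_p(\langle a, x \rangle) \le E_p\left( a^{T} \Sigma\, a \right) = a^{T} \Sigma\, a,$$

which holds with equality, since the variance of $\langle a, x \rangle$ under a Gaussian law with covariance $\Sigma$ is exactly $a^{T} \Sigma\, a$.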

BCCT inequality

The inequality was generalized in 2008[3] to account for both continuous and discrete cases, and for all linear maps, with precise estimates on the constant.

Definition: the Brascamp-Lieb datum (BL datum)

  • $d, n \ge 1$.
  • $d_1, \dots, d_n \in \{1, 2, \dots, d\}$.
  • $p_1, \dots, p_n \in [0, \infty)$.
  • $B_i : \mathbb{R}^d \to \mathbb{R}^{d_i}$ are linear surjections, with zero common kernel: $\bigcap_i \ker(B_i) = \{0\}$.
  • Call $(B, p) = (B_1, \dots, B_n, p_1, \dots, p_n)$ a Brascamp–Lieb datum (BL datum).

For any $f_i \in L^1(\mathbb{R}^{d_i})$ with $f_i \ge 0$, define

$$BL(B, p, f) := \frac{\int_{\mathbb{R}^d} \prod_{j=1}^{n} (f_j \circ B_j)^{p_j}}{\prod_{j=1}^{n} \left( \int_{\mathbb{R}^{d_j}} f_j \right)^{p_j}}.$$


Now define the Brascamp–Lieb constant for the BL datum:

$$BL(B, p) = \max_f BL(B, p, f).$$

Theorem (BCCT[3]): $BL(B, p) < \infty$ if and only if $d = \sum_{i} p_i d_i$ and, for every subspace $V$ of $\mathbb{R}^d$, $\dim(V) \le \sum_{i} p_i \dim(B_i V)$.
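
For the datum of the geometric inequality above (take $d_i = n_i$, $p_i = c_i$ and the same surjections $B_i$), this constant coincides with the sharp constant appearing there,

$$BL(B, p) = D^{-1/2},$$

with $D$ the Gaussian infimum defined in that section.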

Discrete case

Setup:

  • BL datum defined as $(G, G_1, \dots, G_n, \phi_1, \dots, \phi_n)$.

With this setup, Theorem 2.4[4] and Theorem 3.12[5] give a discrete analogue of the Brascamp–Lieb inequality, with constant $|T(G)|$.

Note that the constant $|T(G)|$ is not always tight.

BL polytope

Given a BL datum $(B, p)$, the conditions for $BL(B, p) < \infty$ are

  • $d = \sum_i p_i d_i$, and
  • for every subspace $V$ of $\mathbb{R}^d$, $\dim(V) \le \sum_i p_i \dim(B_i(V))$.

Thus, the subset of $p \in [0, \infty)^n$ that satisfies the above two conditions is a closed convex polytope defined by linear inequalities. This is the BL polytope.

Note that while there are infinitely many possible choices of subspace $V$ of $\mathbb{R}^d$, there are only finitely many distinct inequalities of the form $\dim(V) \le \sum_i p_i \dim(B_i(V))$, so the subset is a closed convex polytope.
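
As a worked example (the Loomis–Whitney datum mentioned in the introduction; the computation is supplied here for illustration): take $d = 3$, $n = 3$, $d_i = 2$, and let $B_i : \mathbb{R}^3 \to \mathbb{R}^2$ be the coordinate projection that deletes the $i$-th coordinate. The scaling condition reads $3 = 2(p_1 + p_2 + p_3)$, and applying the subspace condition to the three coordinate axes gives

$$1 \le p_2 + p_3, \qquad 1 \le p_1 + p_3, \qquad 1 \le p_1 + p_2.$$

Together these force $p_1 = p_2 = p_3 = 1/2$, so on the scaling hyperplane the BL polytope reduces to the single point $(1/2, 1/2, 1/2)$, recovering the exponents of the Loomis–Whitney inequality.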

Similarly we can define the BL polytope for the discrete case.

Relationships to other inequalities

The geometric Brascamp–Lieb inequality

The case of the Brascamp–Lieb inequality in which all the $n_i$ are equal to 1 was proved earlier than the general case.[6] In 1989, Keith Ball introduced a "geometric form" of this inequality. Suppose that $(u_i)_{i=1}^m$ are unit vectors in $\mathbb{R}^n$ and $(c_i)_{i=1}^m$ are positive numbers satisfying

$$\sum_{i=1}^{m} c_i \langle x, u_i \rangle u_i = x$$

for all $x \in \mathbb{R}^n$, and that $(f_i)_{i=1}^m$ are positive measurable functions on $\mathbb{R}$. Then

$$\int_{\mathbb{R}^n} \prod_{i=1}^{m} f_i(\langle x, u_i \rangle)^{c_i} \, \mathrm{d}x \le \prod_{i=1}^{m} \left( \int_{\mathbb{R}} f_i(t) \, \mathrm{d}t \right)^{c_i}.$$

Thus, when the vectors $(u_i)$ resolve the inner product, the inequality has a particularly simple form: the constant is equal to 1 and the extremal Gaussian densities are identical. Ball used this inequality to estimate volume ratios and isoperimetric quotients for convex sets in [7] and [8].
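
For instance (a standard configuration, supplied here for illustration): in $\mathbb{R}^2$ take $m = 3$ unit vectors $u_1, u_2, u_3$ at angles $0$, $\pi/3$ and $2\pi/3$, and $c_1 = c_2 = c_3 = 2/3$. Then $\sum_{i} c_i \langle x, u_i \rangle u_i = x$ for every $x \in \mathbb{R}^2$, and the geometric form reads

$$\int_{\mathbb{R}^2} \prod_{i=1}^{3} f_i(\langle x, u_i \rangle)^{2/3} \, \mathrm{d}x \le \prod_{i=1}^{3} \left( \int_{\mathbb{R}} f_i(t) \, \mathrm{d}t \right)^{2/3},$$

with constant equal to 1 even though the directions $u_i$ are not orthogonal.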

There is also a geometric version of the more general inequality in which the maps $B_i$ are orthogonal projections and

$$\sum_{i=1}^{m} c_i B_i = I,$$

where $I$ is the identity operator on $\mathbb{R}^n$.

Hölder's inequality

Take $n_i = n$, $B_i = \mathrm{id}$, the identity map on $\mathbb{R}^n$, replacing $f_i$ by $f_i^{p_i}$, and let $c_i = 1/p_i$ for $1 \le i \le m$. Then

$$\sum_{i=1}^{m} \frac{1}{p_i} = 1$$

and the log-concavity of the determinant of a positive definite matrix implies that $D = 1$. This yields Hölder's inequality in $\mathbb{R}^n$:

$$\int_{\mathbb{R}^n} \prod_{i=1}^{m} f_i(x) \, \mathrm{d}x \le \prod_{i=1}^{m} \| f_i \|_{p_i}.$$
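
Concretely, the log-concavity used here is the statement that $A \mapsto \log \det A$ is concave on positive-definite matrices, so that whenever $\sum_{i} c_i = 1$,

$$\det\left( \sum_{i=1}^{m} c_i A_i \right) \ge \prod_{i=1}^{m} (\det A_i)^{c_i},$$

with equality when $A_1 = \dots = A_m$; hence the infimum defining $D$ is attained at equal $A_i$ and equals 1 in this case.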

Poincaré inequality

The Brascamp–Lieb inequality is an extension of the Poincaré inequality, which concerns only Gaussian probability distributions.[9]
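
Indeed, for the standard Gaussian density $p(x) \propto \exp(-\|x\|^2/2)$ one has $\phi(x) = \|x\|^2/2$ up to an additive constant, so $H\phi(x) = I$, and the alternative form above reduces to the Gaussian Poincaré inequality

$$\operatorname{var}_p(S(x)) \le E_p\left( \| \nabla S(x) \|^2 \right).$$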

Cramér–Rao bound

The Brascamp–Lieb inequality is also related to the Cramér–Rao bound.[9] While Brascamp–Lieb is an upper bound, the Cramér–Rao bound lower-bounds the variance $\operatorname{var}_p(S(x))$. The Cramér–Rao bound states

$$\operatorname{var}_p(S(x)) \ge E_p\left( \nabla^T S(x) \right) \left[ E_p\left( H \phi(x) \right) \right]^{-1} E_p\left( \nabla S(x) \right),$$

which is very similar to the Brascamp–Lieb inequality in the alternative form shown above.

References

Template:Reflist