McDiarmid's inequality


In probability theory and theoretical computer science, McDiarmid's inequality (named after Colin McDiarmid[1]) is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument of the function while leaving all other arguments unchanged cannot change the value of the function too much.

Statement

A function \( f\colon \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \to \mathbb{R} \) satisfies the bounded differences property if substituting the value of the \(i\)th coordinate \( x_i \) changes the value of \( f \) by at most \( c_i \). More formally, there are constants \( c_1, c_2, \dots, c_n \) such that for all \( i \in [n] \) and all \( x_1 \in \mathcal{X}_1,\, x_2 \in \mathcal{X}_2,\, \dots,\, x_n \in \mathcal{X}_n \),

\[
\sup_{x_i' \in \mathcal{X}_i} \left| f(x_1, \dots, x_{i-1}, x_i, x_{i+1}, \dots, x_n) - f(x_1, \dots, x_{i-1}, x_i', x_{i+1}, \dots, x_n) \right| \le c_i .
\]

Theorem (McDiarmid's inequality). Let \( f\colon \mathcal{X}_1 \times \cdots \times \mathcal{X}_n \to \mathbb{R} \) satisfy the bounded differences property with bounds \( c_1, c_2, \dots, c_n \), and let \( X_1, X_2, \dots, X_n \) be independent random variables with \( X_i \in \mathcal{X}_i \) for all \( i \). Then, for any \( \varepsilon > 0 \),

\[
\mathrm{P}\left( \left| f(X_1, \dots, X_n) - \mathbb{E}\left[ f(X_1, \dots, X_n) \right] \right| \ge \varepsilon \right) \le 2 \exp\left( -\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2} \right).
\]
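
As an informal numerical check (not part of the statement), the following Python sketch takes \( f \) to be the empirical mean of \( n \) independent Uniform(0, 1) variables, for which each \( c_i = 1/n \), and compares the empirical tail probability against the bound. NumPy, the sample sizes, and the choice of \( f \) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: f is the empirical mean of n i.i.d. Uniform(0, 1)
# variables, so changing one argument moves f by at most c_i = 1/n and
# sum(c_i^2) = 1/n; the McDiarmid bound is then 2 exp(-2 eps^2 n).
rng = np.random.default_rng(0)
n, trials, eps = 50, 100_000, 0.1

X = rng.uniform(0.0, 1.0, size=(trials, n))
deviation = np.abs(X.mean(axis=1) - 0.5)   # |f(X) - E[f(X)]|, with E[f] = 1/2

empirical = np.mean(deviation >= eps)
bound = 2 * np.exp(-2 * eps**2 * n)        # 2 exp(-2 eps^2 / sum c_i^2)

print(f"empirical tail: {empirical:.4f}, McDiarmid bound: {bound:.4f}")
```

The empirical tail probability is far below the bound here, which is expected: McDiarmid's inequality holds for every function with bounded differences and is not tight for any particular one.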

Extensions

Unbalanced distributions

A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.


Such a bound may be used to characterize, for example, the value of a function on graphs when it is evaluated on sparse random graphs and hypergraphs, since in a sparse random graph it is much more likely for any particular edge to be missing than to be present.
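
As an informal illustration of why sparsity helps (the edge-count function, NumPy, and the value of \( p \) below are assumptions for the sketch, not part of any statement here), the following Python snippet estimates how rarely resampling a single Bernoulli(\( p \)) edge indicator changes its value when \( p \) is small:

```python
import numpy as np

# Illustrative sketch: one coordinate of a G(n, p) edge-indicator vector.
# For f = total edge count, resampling a coordinate changes f only if the
# indicator actually flips, which happens with probability 2 p (1 - p),
# a small quantity when the graph is sparse.
rng = np.random.default_rng(0)
p, trials = 0.01, 200_000

original = rng.random(trials) < p
resampled = rng.random(trials) < p
flip_rate = np.mean(original != resampled)

print(f"empirical flip rate: {flip_rate:.5f}, theory 2p(1-p): {2*p*(1-p):.5f}")
```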

Differences bounded with high probability

McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.


There exist stronger refinements to this analysis in some distribution-dependent scenarios,[2] such as those that arise in learning theory.

Sub-Gaussian and sub-exponential norms

Let the \(k\)th centered conditional version of a function \( f \) be

\[
f_k(X)(x) := f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) - \mathbb{E}_{X_k'} f(x_1, \dots, x_{k-1}, X_k', x_{k+1}, \dots, x_n),
\]

so that \( f_k(X) \) is a random variable depending on \( X_k \), viewed as a function of the remaining values \( x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_n \).
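
The following Python sketch illustrates this definition for the toy choice \( f = \max \) (an assumption, as are NumPy and the parameters): all arguments except the \(k\)th are fixed, and the conditional mean is approximated with an independent copy \( X_k' \), so the centered version averages to roughly zero.

```python
import numpy as np

# Illustrative sketch: the k-th centered conditional version of f = max,
# with every argument except the k-th held at fixed values x. The
# conditional mean is estimated with an independent copy X'_k, so the
# centered version should average to approximately zero over X_k.
rng = np.random.default_rng(0)
n, k, samples = 5, 2, 200_000

x = rng.uniform(size=n)                  # fixed values for the other arguments

def f_with_kth(values):
    args = np.tile(x, (values.size, 1))  # repeat the fixed arguments
    args[:, k] = values                  # inject the k-th argument
    return args.max(axis=1)              # f = max of all n arguments

Xk = rng.uniform(size=samples)           # random k-th argument
Xk_prime = rng.uniform(size=samples)     # independent copy for the mean

fk = f_with_kth(Xk) - f_with_kth(Xk_prime).mean()
print(f"mean of f_k(X): {fk.mean():.5f}  (close to 0 by construction)")
```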


Bennett and Bernstein forms

Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let

\[
\begin{aligned}
B &:= \max_{k \in [n]} \sup_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_n} \left| f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) - \mathbb{E}_{X_k} f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) \right|, \\
V_k &:= \sup_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_n} \mathbb{E}_{X_k} \left( f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) - \mathbb{E}_{X_k} f(x_1, \dots, x_{k-1}, X_k, x_{k+1}, \dots, x_n) \right)^2, \\
\tilde{\sigma}^2 &:= \sum_{k=1}^{n} V_k .
\end{aligned}
\]

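As an informal illustration, the quantities above can be estimated by simulation. In the Python sketch below, \( f \) is taken to be the empirical mean of \( n \) Uniform(0, 1) arguments (an illustrative assumption), for which the conditional fluctuation in the \(k\)th argument is \( (X_k - 1/2)/n \), so \( V_k = 1/(12n^2) \) and \( \tilde{\sigma}^2 = 1/(12n) \) exactly:

```python
import numpy as np

# Illustrative sketch: variance proxies for f = empirical mean of n
# Uniform(0, 1) arguments. The conditional fluctuation in the k-th argument
# is (X_k - 1/2)/n regardless of the fixed values of the other arguments,
# so V_k = Var(X_k)/n^2 = 1/(12 n^2) and sigma~^2 = n V_k = 1/(12 n).
rng = np.random.default_rng(0)
n, samples = 20, 500_000

Xk = rng.uniform(size=samples)
fluct = (Xk - 0.5) / n              # f(..., X_k, ...) - E_{X_k} f(..., X_k, ...)

B_hat = np.abs(fluct).max()         # observed |fluctuation|, at most 1/(2n)
sigma2_hat = n * np.mean(fluct**2)  # n * V_k, since all coordinates match

print(f"B estimate: {B_hat:.5f}  (exact sup: {1/(2*n):.5f})")
print(f"sigma~^2 estimate: {sigma2_hat:.6f}  (exact: {1/(12*n):.6f})")
```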

Proof

The following proof of McDiarmid's inequality[3] constructs the Doob martingale that tracks the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality). An alternative proof, avoiding martingales, exploits the independence of the function arguments in a Chernoff-bound-like argument.[4]

For better readability, we will introduce a notational shorthand: \( z_{i \dots j} \) will denote \( z_i, \dots, z_j \) for any \( z \in \mathcal{X}^n \) and integers \( 1 \le i \le j \le n \), so that, for example,

\[
f(X_{1 \dots (i-1)}, y, x_{(i+1) \dots n}) := f(X_1, \dots, X_{i-1}, y, x_{i+1}, \dots, x_n).
\]

Pick any \( x_1, x_2, \dots, x_n \). Then, for any \( x_1', x_2', \dots, x_n' \), by the triangle inequality,

\[
\begin{aligned}
|f(x_{1 \dots n}) - f(x'_{1 \dots n})| &\le |f(x_{1 \dots n}) - f(x'_{1 \dots (n-1)}, x_n)| + c_n \\
&\le |f(x_{1 \dots n}) - f(x'_{1 \dots (n-2)}, x_{(n-1) \dots n})| + c_{n-1} + c_n \\
&\le \cdots \le \sum_{i=1}^{n} c_i ,
\end{aligned}
\]

and thus f is bounded.
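
This telescoping step can be checked numerically. The Python sketch below (with the illustrative assumption \( f = \max \) on \( [0,1]^n \), which has bounded differences with every \( c_i = 1 \)) substitutes one coordinate at a time and verifies that the per-step changes sum to a bound on \( |f(x) - f(x')| \):

```python
import numpy as np

# Illustrative sketch: verify the telescoping bound for f = max on [0, 1]^n,
# which satisfies the bounded differences property with every c_i = 1.
rng = np.random.default_rng(0)
n = 6
x, x_prime = rng.uniform(size=n), rng.uniform(size=n)

hybrid = x.copy()
prev_value = hybrid.max()
step_sum = 0.0
for i in range(n):
    hybrid[i] = x_prime[i]                 # substitute the i-th coordinate
    step = abs(hybrid.max() - prev_value)  # change caused by this substitution
    assert step <= 1.0                     # bounded difference: c_i = 1
    step_sum += step
    prev_value = hybrid.max()

# Triangle inequality: |f(x) - f(x')| <= sum of per-step changes <= sum c_i.
assert abs(x.max() - x_prime.max()) <= step_sum + 1e-12
print(f"|f(x) - f(x')| = {abs(x.max() - x_prime.max()):.4f} <= {step_sum:.4f}")
```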

Since \( f \) is bounded, define the Doob martingale \( \{Z_i\} \) (each \( Z_i \) being a random variable depending on the random values of \( X_1, \dots, X_i \)) as

\[
Z_i := \mathbb{E}\left[ f(X_{1 \dots n}) \mid X_{1 \dots i} \right]
\]

for all \( i \ge 1 \) and \( Z_0 := \mathbb{E}\left[ f(X_{1 \dots n}) \right] \), so that \( Z_n = f(X_{1 \dots n}) \).
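
For a concrete instance of this construction (an illustrative assumption, not part of the proof), take \( f \) to be the empirical mean of independent Bernoulli(\( \mu \)) variables; then \( Z_i \) has the closed form \( (X_1 + \cdots + X_i + (n-i)\mu)/n \), and the endpoints \( Z_0 = \mathbb{E}[f] \) and \( Z_n = f \) can be verified directly in Python:

```python
import numpy as np

# Illustrative sketch: the Doob martingale for f = empirical mean of n
# independent Bernoulli(mu) variables, where the conditional expectation
# has the closed form Z_i = (X_1 + ... + X_i + (n - i) * mu) / n.
rng = np.random.default_rng(0)
n, mu, trials = 10, 0.3, 100_000

X = (rng.random((trials, n)) < mu).astype(float)
partial_sums = np.hstack([np.zeros((trials, 1)), np.cumsum(X, axis=1)])
i = np.arange(n + 1)
Z = (partial_sums + (n - i) * mu) / n            # columns are Z_0, ..., Z_n

assert np.allclose(Z[:, 0], mu)                  # Z_0 = E[f(X)]
assert np.allclose(Z[:, -1], X.mean(axis=1))     # Z_n = f(X)
print("mean increments:", np.round(np.diff(Z, axis=1).mean(axis=0), 4))
```

The printed increment means are near zero, consistent with the martingale property \( \mathbb{E}[Z_i \mid X_{1 \dots (i-1)}] = Z_{i-1} \).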

Now define, for each \( i \), the random variables

\[
\begin{aligned}
U_i &:= \sup_{x \in \mathcal{X}_i} \mathbb{E}\left[ f(X_{1 \dots (i-1)}, x, X_{(i+1) \dots n}) \mid X_{1 \dots (i-1)}, X_i = x \right] - \mathbb{E}\left[ f(X_{1 \dots (i-1)}, X_{i \dots n}) \mid X_{1 \dots (i-1)} \right], \\
L_i &:= \inf_{x \in \mathcal{X}_i} \mathbb{E}\left[ f(X_{1 \dots (i-1)}, x, X_{(i+1) \dots n}) \mid X_{1 \dots (i-1)}, X_i = x \right] - \mathbb{E}\left[ f(X_{1 \dots (i-1)}, X_{i \dots n}) \mid X_{1 \dots (i-1)} \right].
\end{aligned}
\]

Since \( X_i, \dots, X_n \) are independent of each other, conditioning on \( X_i = x \) does not affect the probabilities of the other variables, so these are equal to the expressions

\[
\begin{aligned}
U_i &= \sup_{x \in \mathcal{X}_i} \mathbb{E}\left[ f(X_{1 \dots (i-1)}, x, X_{(i+1) \dots n}) - f(X_{1 \dots (i-1)}, X_{i \dots n}) \mid X_{1 \dots (i-1)} \right], \\
L_i &= \inf_{x \in \mathcal{X}_i} \mathbb{E}\left[ f(X_{1 \dots (i-1)}, x, X_{(i+1) \dots n}) - f(X_{1 \dots (i-1)}, X_{i \dots n}) \mid X_{1 \dots (i-1)} \right].
\end{aligned}
\]

Note that \( L_i \le Z_i - Z_{i-1} \le U_i \). In addition,

\[
\begin{aligned}
U_i - L_i &= \sup_{u \in \mathcal{X}_i,\, l \in \mathcal{X}_i} \mathbb{E}\left[ f(X_{1 \dots (i-1)}, u, X_{(i+1) \dots n}) \mid X_{1 \dots (i-1)} \right] - \mathbb{E}\left[ f(X_{1 \dots (i-1)}, l, X_{(i+1) \dots n}) \mid X_{1 \dots (i-1)} \right] \\
&= \sup_{u \in \mathcal{X}_i,\, l \in \mathcal{X}_i} \mathbb{E}\left[ f(X_{1 \dots (i-1)}, u, X_{(i+1) \dots n}) - f(X_{1 \dots (i-1)}, l, X_{(i+1) \dots n}) \mid X_{1 \dots (i-1)} \right] \\
&\le \sup_{u \in \mathcal{X}_i,\, l \in \mathcal{X}_i} \mathbb{E}\left[ c_i \mid X_{1 \dots (i-1)} \right] \le c_i .
\end{aligned}
\]

Then, applying the general form of Azuma's inequality to \( \{Z_i\} \), we have

\[
\mathrm{P}\left( f(X_1, \dots, X_n) - \mathbb{E}\left[ f(X_1, \dots, X_n) \right] \ge \varepsilon \right) = \mathrm{P}(Z_n - Z_0 \ge \varepsilon) \le \exp\left( -\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2} \right).
\]

The one-sided bound in the other direction is obtained by applying Azuma's inequality to \( \{-Z_i\} \), and the two-sided bound follows from a union bound.


References


  1. ↑ McDiarmid, Colin (1989). "On the method of bounded differences". Surveys in Combinatorics, 1989. London Mathematical Society Lecture Note Series. Vol. 141. Cambridge University Press. pp. 148–188.
  2. ↑ Template:Cite journal
  3. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named Doob
  4. ↑ Cite error: Invalid <ref> tag; no text was provided for refs named ying