Bennett's inequality
In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.[1]
Statement
Let $X_1, \ldots, X_n$ be independent random variables with finite variance. Further assume $X_i \le a$ almost surely for all $i$, and define

$$S_n = \sum_{i=1}^n \left( X_i - \mathbb{E}[X_i] \right) \qquad \text{and} \qquad \sigma^2 = \sum_{i=1}^n \mathbb{E}\!\left[ \left( X_i - \mathbb{E}[X_i] \right)^2 \right].$$

Then for any $t \ge 0$,

$$\Pr\left( S_n > t \right) \le \exp\!\left( -\frac{\sigma^2}{a^2} \, h\!\left( \frac{at}{\sigma^2} \right) \right),$$

where $h(u) = (1+u)\log(1+u) - u$ and $\log$ denotes the natural logarithm.[2][3]
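The right-hand side is straightforward to evaluate numerically. The following is a minimal Python sketch (the names `h` and `bennett_bound`, and the example values, are purely illustrative and not from any standard library) that computes the bound from the almost sure bound $a$, the total variance $\sigma^2$, and the deviation $t$:

```python
import math

def h(u):
    # h(u) = (1 + u) log(1 + u) - u, with the natural logarithm
    return (1.0 + u) * math.log1p(u) - u

def bennett_bound(t, sigma2, a):
    """Right-hand side of Bennett's inequality for t >= 0."""
    if t < 0 or sigma2 <= 0 or a <= 0:
        raise ValueError("require t >= 0, sigma2 > 0, a > 0")
    return math.exp(-(sigma2 / a ** 2) * h(a * t / sigma2))

# Illustrative values only: total variance 250, almost sure bound a = 1, deviation t = 50.
print(bennett_bound(t=50.0, sigma2=250.0, a=1.0))
```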
Generalizations and comparisons to other bounds
For generalizations, see Freedman (1975)[4] for a martingale version of Bennett's inequality and Fan, Grama and Liu (2012)[5] for an improvement of it.
Hoeffding's inequality only assumes the summands are bounded almost surely, while Bennett's inequality offers some improvement when the variances of the summands are small compared to their almost sure bounds. However, Hoeffding's inequality entails sub-Gaussian tails, whereas in general Bennett's inequality has Poissonian tails.
Bennett's inequality is most similar to the Bernstein inequalities, the first of which also gives concentration in terms of the variance and almost sure bound on the individual terms. Bennett's inequality is stronger than this bound, but more complicated to compute.[3]
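This relationship can be checked numerically. The sketch below (with arbitrary illustrative parameter values) evaluates Bennett's bound and the first Bernstein bound, taken here in the form $\exp\!\left( -\tfrac{t^2}{2\sigma^2 + \tfrac{2}{3}at} \right)$, for a common variance $\sigma^2$ and almost sure bound $a$:

```python
import math

def bennett(t, sigma2, a):
    u = a * t / sigma2
    return math.exp(-(sigma2 / a ** 2) * ((1.0 + u) * math.log1p(u) - u))

def bernstein(t, sigma2, a):
    return math.exp(-t * t / (2.0 * sigma2 + 2.0 * a * t / 3.0))

sigma2, a = 25.0, 1.0  # illustrative values only
for t in (5.0, 20.0, 80.0):
    b_bennett, b_bernstein = bennett(t, sigma2, a), bernstein(t, sigma2, a)
    print(f"t={t:5.1f}  Bennett={b_bennett:.3e}  Bernstein={b_bernstein:.3e}")
    assert b_bennett <= b_bernstein  # Bennett's bound is never larger
```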
In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions.
Example
Suppose that each $X_i$ is an independent binary random variable with probability $p$, so that $\sigma^2 = np(1-p)$ and one may take $a = 1$. Then Bennett's inequality says that:

$$\Pr\!\left( \sum_{i=1}^n X_i > np + t \right) \le \exp\!\left( - np(1-p) \, h\!\left( \frac{t}{np(1-p)} \right) \right).$$
For $t \ge np(1-p)$, one has $h\!\left( \tfrac{t}{np(1-p)} \right) \ge \tfrac{t}{2np(1-p)} \log \tfrac{t}{np(1-p)}$, so

$$\Pr\!\left( \sum_{i=1}^n X_i > np + t \right) \le \exp\!\left( -\frac{t}{2} \log \frac{t}{np(1-p)} \right)$$

for $t \ge np(1-p)$.
By contrast, Hoeffding's inequality gives a bound of $\exp\!\left( -\tfrac{2t^2}{n} \right)$ and the first Bernstein inequality gives a bound of $\exp\!\left( -\tfrac{t^2}{2np(1-p) + \tfrac{2}{3}t} \right)$. For $np(1-p) \ll t \ll n$, Hoeffding's inequality gives $e^{-\Theta(t^2/n)}$, Bernstein gives $e^{-\Theta(t)}$, and Bennett gives $e^{-\Theta\left( t \log \frac{t}{np(1-p)} \right)}$.
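To make the comparison concrete, the following sketch evaluates the three bounds in the binary setting above for one illustrative parameter choice (the values $n = 10{,}000$, $p = 0.001$, $t = 60$ are chosen here only so that $t$ is well above $np(1-p) \approx 10$ while still far below $n$):

```python
import math

def h(u):
    return (1.0 + u) * math.log1p(u) - u

def bennett(t, n, p):
    var = n * p * (1.0 - p)  # total variance of the sum
    return math.exp(-var * h(t / var))

def bernstein(t, n, p):
    var = n * p * (1.0 - p)
    return math.exp(-t * t / (2.0 * var + 2.0 * t / 3.0))

def hoeffding(t, n, p):
    return math.exp(-2.0 * t * t / n)

n, p, t = 10_000, 0.001, 60.0
for name, bound in (("Hoeffding", hoeffding), ("Bernstein", bernstein), ("Bennett", bennett)):
    print(f"{name:10s} {bound(t, n, p):.3e}")
```

For these values the Hoeffding bound is larger than the Bernstein bound, which in turn is larger than the Bennett bound, consistent with the comparison above.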
See also
- Concentration inequality - a summary of tail bounds on random variables.