Leftover hash lemma


The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby.[1]

Given a secret key $X$ that has $n$ uniform random bits, of which an adversary was able to learn the values of some $t < n$ bits, the leftover hash lemma states that it is possible to produce a key of about $n - t$ bits over which the adversary has almost no knowledge, without knowing which $t$ bits are known to the adversary. Since the adversary knows all but $n - t$ bits, this is almost optimal.

More precisely, the leftover hash lemma states that it is possible to extract a number of bits asymptotic to $H_\infty(X)$ (the min-entropy of $X$) from a random variable $X$, such that the extracted bits are almost uniformly distributed. In other words, an adversary who has some partial knowledge about $X$ will have almost no knowledge about the extracted value. This is also known as privacy amplification (see the privacy amplification section of the article Quantum key distribution).

Randomness extractors achieve the same result, but normally use less randomness.

Let $X$ be a random variable over $\mathcal{X}$ and let $m > 0$. Let $h \colon \mathcal{S} \times \mathcal{X} \rightarrow \{0, 1\}^m$ be a 2-universal hash function. If

$m \leq H_\infty(X) - 2 \log \left( \frac{1}{\varepsilon} \right)$

then for $S$ uniform over $\mathcal{S}$ and independent of $X$, we have:

$\delta[(h(S, X), S), (U, S)] \leq \varepsilon$

where $U$ is uniform over $\{0, 1\}^m$ and independent of $S$.[2]
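As an illustration (not part of the lemma itself), the statement can be checked empirically with a random binary matrix over GF(2), a standard 2-universal family; the source distribution, parameters, and trial count below are arbitrary choices made for the demonstration:

```python
import random

def hash_bits(seed_rows, x):
    # h_M(x): multiply a random binary matrix M (one integer bitmask per
    # output row) by the input x over GF(2).  For distinct x, y the
    # collision probability over M is 2^-m, so the family is 2-universal.
    out = 0
    for row in seed_rows:
        out = (out << 1) | (bin(row & x).count("1") & 1)
    return out

n, m = 8, 2                      # 8 source bits, extract m = 2 bits
random.seed(1)                   # fixed seed for reproducibility

# A non-uniform source X: uniform over 16 of the 256 possible values,
# so H_inf(X) = 4 and m = 2 <= 4 - 2*log2(1/eps) holds with eps = 1/2.
support = random.sample(range(256), 16)

# For each fixed seed (matrix M), measure the statistical distance of
# h_M(X) from uniform on {0,1}^m, then average over random seeds.
trials = 200
total = 0.0
for _ in range(trials):
    rows = [random.getrandbits(n) for _ in range(m)]
    counts = [0] * (1 << m)
    for x in support:
        counts[hash_bits(rows, x)] += 1
    total += 0.5 * sum(abs(c / len(support) - 1 / (1 << m)) for c in counts)

print(total / trials)  # stays below the eps = 1/2 promised by the lemma
```

Averaging over the seed matches the lemma's statement, in which the seed $S$ is published alongside the extracted value $h(S, X)$.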

$H_\infty(X) = -\log \max_x \Pr[X = x]$ is the min-entropy of $X$, which measures the amount of randomness $X$ has. The min-entropy is always less than or equal to the Shannon entropy. Note that $\max_x \Pr[X = x]$ is the probability of correctly guessing $X$. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess $X$.
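A small numeric check of this definition (the distribution below is an arbitrary example):

```python
import math

def min_entropy(probs):
    # H_inf = -log2(max probability): the bits of randomness as seen by
    # an optimal guessing adversary.
    return -math.log2(max(probs))

# A biased distribution over four values: the best guess succeeds with
# probability 1/2, so H_inf = 1 bit, while the Shannon entropy is larger.
probs = [0.5, 0.25, 0.125, 0.125]
shannon = -sum(p * math.log2(p) for p in probs)
print(min_entropy(probs))  # 1.0
print(shannon)             # 1.75
```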

$0 \leq \delta(X, Y) = \frac{1}{2} \sum_v \left| \Pr[X = v] - \Pr[Y = v] \right| \leq 1$ is the statistical distance between $X$ and $Y$.
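The statistical distance is straightforward to compute for explicitly given distributions (the two distributions below are arbitrary examples):

```python
def statistical_distance(p, q):
    # delta(X, Y) = 1/2 * sum over the common support of
    # |Pr[X = v] - Pr[Y = v]|; distributions are dicts value -> probability.
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

uniform = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
biased  = {0: 0.40, 1: 0.30, 2: 0.20, 3: 0.10}
print(statistical_distance(uniform, biased))  # 0.2
```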

References

Template:Reflist