Learning with errors

In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms.[1] It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it.[2] In more technical terms, it refers to the computational problem of inferring a linear n-ary function f over a finite ring from given samples yi = f(𝐱i), some of which may be erroneous. The LWE problem is conjectured to be hard to solve,[1] and thus to be useful in cryptography.

More precisely, the LWE problem is defined as follows. Let ℤq denote the ring of integers modulo q and let ℤqn denote the set of n-vectors over ℤq. There exists a certain unknown linear function f: ℤqn → ℤq, and the input to the LWE problem is a sample of pairs (𝐱, y), where 𝐱 ∈ ℤqn and y ∈ ℤq, so that with high probability y = f(𝐱). Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function f, or some close approximation thereof, with high probability.

The LWE problem was introduced by Oded Regev in 2005[3] (who won the 2018 Gรถdel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems,[3][4] such as the ring learning with errors key exchange by Peikert.[5]

Definition

Denote by ๐•‹=โ„/โ„ค the additive group on reals modulo one. Let ๐ฌโ„คqn be a fixed vector. Let ϕ be a fixed probability distribution over ๐•‹. Denote by A๐ฌ,ϕ the distribution on โ„คqn×๐•‹ obtained as follows.

  1. Pick a vector ๐šโ„คqn from the uniform distribution over โ„คqn,
  2. Pick a number e๐•‹ from the distribution ϕ,
  3. Evaluate t=๐š,๐ฌ/q+e, where ๐š,๐ฌ=i=1naisi is the standard inner product in โ„คqn, the division is done in the field of reals (or more formally, this "division by q" is notation for the group homomorphism โ„คq๐•‹ mapping 1โ„คq to 1/q+โ„ค๐•‹), and the final addition is in ๐•‹.
  4. Output the pair (๐š,t).

The learning with errors problem LWEq,ϕ is to find 𝐬 ∈ ℤqn, given access to polynomially many samples of choice from A𝐬,ϕ.

For every α > 0, denote by Dα the one-dimensional Gaussian with zero mean and variance α²/(2π), that is, the density function is Dα(x) = ρα(x)/α where ρα(x) = e^(−π(|x|/α)²), and let Ψα be the distribution on 𝕋 obtained by considering Dα modulo one. The version of LWE considered in most of the results is LWEq,Ψα.
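As a minimal illustration of the sampling procedure above, the following Python sketch draws samples from A𝐬,Ψα in the continuous formulation; the dimension, modulus, noise parameter, and the helper name lwe_sample are all illustrative choices, not taken from the sources.

```python
import numpy as np

rng = np.random.default_rng(0)

n, q = 8, 97          # toy dimension and modulus (illustrative only)
alpha = 0.01          # noise width parameter for Psi_alpha

s = rng.integers(0, q, size=n)   # the secret vector in Z_q^n

def lwe_sample(s, q, alpha, rng):
    """Return one sample (a, t) from A_{s, Psi_alpha} on Z_q^n x T."""
    a = rng.integers(0, q, size=len(s))            # uniform a in Z_q^n
    # Psi_alpha: Gaussian with variance alpha^2 / (2*pi), reduced modulo 1
    e = rng.normal(0.0, alpha / np.sqrt(2 * np.pi)) % 1.0
    t = (int(a @ s) / q + e) % 1.0                 # <a, s>/q + e in T = R/Z
    return a, t

samples = [lwe_sample(s, q, alpha, rng) for _ in range(5)]
for a, t in samples:
    print(a, round(t, 4))
```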

Decision version

The LWE problem described above is the search version of the problem. In the decision version (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples from โ„คqn×๐•‹ (practically, some discretized version of it). Regev[3] showed that the decision and search versions are equivalent when q is a prime bounded by some polynomial in n.

Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by {(𝐚i, bi)} ⊂ ℤqn × 𝕋. If the solver returns a candidate 𝐬, then for all i, calculate {bi − ⟨𝐚i,𝐬⟩/q}. If the samples are from an LWE distribution, then the results of this calculation will be distributed according to χ, but if the samples are uniformly random, these quantities will be distributed uniformly as well.
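Continuing the toy Python sketch from the definition section, this check can be written as follows (the function name and threshold interpretation are illustrative):

```python
def residuals(samples, s_candidate, q):
    """Distance of b_i - <a_i, s>/q from 0 in T, for each sample (a_i, b_i)."""
    out = []
    for a, b in samples:
        r = (b - int(a @ s_candidate) / q) % 1.0
        out.append(min(r, 1.0 - r))     # distance to the nearest integer
    return out

# If the samples came from A_{s,chi}, the residuals follow the narrow noise
# distribution chi and concentrate near 0; for uniform samples they are
# spread uniformly over [0, 1/2].
print(residuals(samples, s, q))
```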

Solving search assuming decision

For the other direction, given a solver for the decision problem, the search version can be solved as follows: recover 𝐬 one coordinate at a time. To obtain the first coordinate, 𝐬1, make a guess k ∈ ℤq, and do the following. Transform the given samples {(𝐚i, bi)} ⊂ ℤqn × 𝕋 as follows: for each sample, choose a fresh r ∈ ℤq uniformly at random and calculate (𝐚i + (r,0,…,0), bi + (rk)/q). Send the transformed samples to the decision solver.

If the guess k was correct, the transformation takes the distribution A๐ฌ,χ to itself, and otherwise, since q is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, since q is bounded by some polynomial in n, it only takes polynomial time to guess every possible value for k and use the solver to see which one is correct.

After obtaining ๐ฌ1, we follow an analogous procedure for each other coordinate ๐ฌj. Namely, we transform our ๐›i samples the same way, and transform our ๐ši samples by calculating ๐ši+(0,,r,,0), where the r is in the jth coordinate.[3]

Peikert[4] showed that this reduction, with a small modification, works for any q that is a product of distinct, small (polynomial in n) primes. The main idea is that if q = q1q2⋯qt, then for each factor qℓ one can guess and check the value of 𝐬j modulo qℓ, and then use the Chinese remainder theorem to recover 𝐬j modulo q.
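Once the residues of 𝐬j modulo each factor are known, the Chinese remainder theorem recombines them. A minimal Python illustration with made-up residues (the numbers are purely for demonstration and not from the sources):

```python
def crt(residues, moduli):
    """Combine s_j mod q_l (pairwise-coprime q_l) into s_j mod q = prod(q_l)."""
    q = 1
    for m in moduli:
        q *= m
    x = 0
    for r, m in zip(residues, moduli):
        M = q // m
        x += r * M * pow(M, -1, m)   # pow(M, -1, m) is M^{-1} mod m (Python 3.8+)
    return x % q

# Example: q = 3*5*7 and guess-and-check gave s_j = 2 (mod 3), 1 (mod 5), 4 (mod 7).
print(crt([2, 1, 4], [3, 5, 7]))     # -> 11, i.e. s_j = 11 (mod 105)
```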

Average case hardness

Regev[3] showed the random self-reducibility of the LWE and DLWE problems for arbitrary q and χ. Given samples {(𝐚i, bi)} from A𝐬,χ, it is easy to see that {(𝐚i, bi + ⟨𝐚i,𝐭⟩/q)} are samples from A𝐬+𝐭,χ.

So, suppose there was some set 𝒮 ⊂ ℤqn such that |𝒮|/|ℤqn| = 1/poly(n), and that for distributions A𝐬,χ with 𝐬 ∈ 𝒮, DLWE was easy.

Then there would be some distinguisher 𝒜 who, given samples {(𝐚i, bi)}, could tell whether they were uniformly random or from A𝐬,χ. If we need to distinguish uniformly random samples from A𝐬,χ, where 𝐬 is chosen uniformly at random from ℤqn, we could simply try different values 𝐭 sampled uniformly at random from ℤqn, calculate {(𝐚i, bi + ⟨𝐚i,𝐭⟩/q)}, and feed these samples to 𝒜. Since 𝒮 comprises a large fraction of ℤqn, with high probability, if we choose a polynomial number of values for 𝐭, we will find one such that 𝐬 + 𝐭 ∈ 𝒮, and 𝒜 will successfully distinguish the samples.

Thus, no such ๐’ฎ can exist, meaning LWE and DLWE are (up to a polynomial factor) as hard in the average case as they are in the worst case.

Hardness results

Regev's result

For an n-dimensional lattice L, let the smoothing parameter ηε(L) denote the smallest s such that ρ1/s(L* ∖ {𝟎}) ≤ ε, where L* is the dual of L and ρα(x) = e^(−π(|x|/α)²) is extended to sets by summing over function values at each element in the set. Let DL,r denote the discrete Gaussian distribution on L of width r for a lattice L and real r > 0. The probability of each x ∈ L is proportional to ρr(x).
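As a concrete illustration of the distribution DL,r in the simplest case L = ℤ, here is a rejection-sampling sketch in Python; the truncation bound and the method are illustrative only and are not the samplers used in the cited reductions.

```python
import math, random

def rho(x, r):
    """Gaussian weight rho_r(x) = exp(-pi * (x / r)**2)."""
    return math.exp(-math.pi * (x / r) ** 2)

def discrete_gaussian_Z(r, tail=12, rng=random):
    """Sample from D_{Z,r}: probability of each integer x proportional to rho_r(x).

    Rejection sampling over the truncated support [-tail*r, tail*r]; the mass
    outside this range is negligible for the purposes of this sketch.
    """
    bound = int(math.ceil(tail * r))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < rho(x, r):
            return x

print([discrete_gaussian_Z(3.0) for _ in range(10)])
```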

The discrete Gaussian sampling problem (DGS) is defined as follows: an instance of DGSϕ is given by an n-dimensional lattice L and a number r ≥ ϕ(L). The goal is to output a sample from DL,r. Regev shows that there is a reduction from GapSVP_{100√n·γ(n)} to DGS_{√n·γ(n)/λ(L*)} for any function γ(n) ≥ 1.

Regev then shows that there exists an efficient quantum algorithm for DGS_{√(2n)·ηε(L)/α} given access to an oracle for LWEq,Ψα for integer q and α ∈ (0,1) such that αq > 2√n. This implies the hardness of LWE. Although the proof of this assertion works for any q, for creating a cryptosystem the modulus q has to be polynomial in n.

Peikert's result

Peikert proves[4] that there is a probabilistic polynomial-time reduction from the GapSVPζ,γ problem in the worst case to solving LWEq,Ψα using poly(n) samples, for parameters α ∈ (0,1), γ(n) ≥ n/(α·√(log n)), ζ(n) ≥ γ(n), and q ≥ (ζ/√n)·ω(√(log n)).

Use in cryptography

The LWE problem serves as a versatile problem used in the construction of several[3][4][6][7] cryptosystems. In 2005, Regev[3] showed that the decision version of LWE is hard assuming quantum hardness of the lattice problems GapSVPγ (for γ as above) and SIVPt with t = Õ(n/α). In 2009, Peikert[4] proved a similar result assuming only the classical hardness of the related problem GapSVPζ,γ. The disadvantage of Peikert's result is that it relies on a non-standard variant of GapSVP, a problem that is easier than SIVP.

Public-key cryptosystem

Regev[3] proposed a public-key cryptosystem based on the hardness of the LWE problem. The cryptosystem, as well as the proof of security and correctness, is completely classical. The system is characterized by m, q, and a probability distribution χ on 𝕋. The setting of the parameters used in the proofs of correctness and security is:

  • q ≥ 2, usually a prime number between n² and 2n².
  • m = (1 + ε)(n + 1) log q for an arbitrary constant ε.
  • χ = Ψα(n) for α(n) ∈ o(1/(√n·log n)), where Ψβ is the probability distribution obtained by sampling a normal variable with mean 0 and standard deviation β/√(2π) and reducing the result modulo 1.

The cryptosystem is then defined by:

  • Private key: The private key is an 𝐬 ∈ ℤqn chosen uniformly at random.
  • Public key: Choose m vectors 𝐚1,…,𝐚m ∈ ℤqn uniformly and independently. Choose error offsets e1,…,em ∈ 𝕋 independently according to χ. The public key consists of (𝐚i, bi = ⟨𝐚i,𝐬⟩/q + ei) for i = 1,…,m.
  • Encryption: The encryption of a bit x ∈ {0,1} is done by choosing a random subset S of [m] and then defining Enc(x) as
(Σi∈S 𝐚i, x/2 + Σi∈S bi).
  • Decryption: The decryption of (𝐚, b) is 0 if b − ⟨𝐚,𝐬⟩/q is closer to 0 than to 1/2, and 1 otherwise.

The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version of LWE: an algorithm for distinguishing between encryptions (with the above parameters) of 0 and 1 can be used to distinguish between A𝐬,χ and the uniform distribution over ℤqn × 𝕋.
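The following self-contained Python sketch implements the scheme described above with small toy parameters; the concrete numbers are illustrative and far below anything secure, and the asymptotic parameter choices listed earlier are what the proofs actually require.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parameters chosen for readability, not security.
n = 16
q = 257                      # a prime between n^2 = 256 and 2*n^2
m = 200
alpha = 0.005

def keygen():
    s = rng.integers(0, q, size=n)                      # private key
    A = rng.integers(0, q, size=(m, n))                 # a_1, ..., a_m as rows
    e = rng.normal(0.0, alpha / np.sqrt(2 * np.pi), m)  # errors from Psi_alpha
    b = ((A @ s) / q + e) % 1.0                         # b_i = <a_i, s>/q + e_i in T
    return s, (A, b)

def encrypt(pub, x):
    A, b = pub
    S = rng.random(m) < 0.5                    # random subset S of [m]
    a_ct = A[S].sum(axis=0) % q
    b_ct = (x / 2 + b[S].sum()) % 1.0
    return a_ct, b_ct

def decrypt(s, ct):
    a_ct, b_ct = ct
    d = (b_ct - int(a_ct @ s) / q) % 1.0
    return 0 if min(d, 1 - d) < 0.25 else 1    # closer to 0 -> 0, closer to 1/2 -> 1

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
print("toy Regev encryption round-trips correctly")
```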

CCA-secure cryptosystem

Peikert[4] proposed a system that is secure even against any chosen-ciphertext attack.

Key exchange

The idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The idea comes from the associativity of matrix multiplication, and the errors are used to provide security. The paper[8] appeared in 2012, after a provisional patent application had been filed earlier that year.

The security of the protocol is proven based on the hardness of solving the LWE problem. In 2014, Peikert presented a key-transport scheme[9] following the same basic idea as Ding's, which also uses Ding's technique of sending an additional one-bit signal for rounding. The "New Hope" implementation,[10] selected for Google's post-quantum experiment,[11] uses Peikert's scheme with a variation in the error distribution.
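To illustrate the core idea, the Python sketch below shows two parties arriving at approximately equal values; the rounding/reconciliation step with the extra one-bit signal, which turns these into identical key bits, is deliberately omitted. It assumes, as such schemes do, that the secrets as well as the errors are drawn from a narrow distribution; all parameters and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 64, 12289                       # illustrative parameters

def small(shape):
    """Small secrets/errors, here a narrow rounded Gaussian (illustrative)."""
    return np.rint(rng.normal(0, 2.0, shape)).astype(np.int64)

A = rng.integers(0, q, size=(n, n), dtype=np.int64)   # shared public matrix

# Each party publishes one noisy product of A with its small secret.
sA, eA = small(n), small(n)
sB, eB = small(n), small(n)
pA = (A @ sA + eA) % q          # Alice -> Bob
pB = (A.T @ sB + eB) % q        # Bob -> Alice

# Both sides now compute approximately the same value s_B^T A s_A:
kA = int(pB @ sA) % q           # Alice: (A^T s_B + e_B) . s_A
kB = int(pA @ sB) % q           # Bob:   (A s_A + e_A) . s_B
diff = min((kA - kB) % q, (kB - kA) % q)
print("difference is small compared to q:", diff, "<<", q)

# A rounding step helped by an extra one-bit signal (as in Ding's and Peikert's
# schemes) is what turns these approximately equal values into identical key bits.
```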

Ring learning with errors signature (RLWE-SIG)

Main article: Ring learning with errors signature

An RLWE version of the classic Feige–Fiat–Shamir identification protocol was created and converted to a digital signature in 2011 by Lyubashevsky. The details of this signature were extended in 2012 by Güneysu, Lyubashevsky, and Pöppelmann and published in their paper "Practical Lattice Based Cryptography – A Signature Scheme for Embedded Systems". These papers laid the groundwork for a variety of recent signature algorithms, some based directly on the ring learning with errors problem and some not tied to the same hard RLWE problems.

References

  1. โ†‘ 1.0 1.1 Template:Cite journal
  2. โ†‘ Template:Cite journal
  3. โ†‘ 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 Oded Regev, โ€œOn lattices, learning with errors, random linear codes, and cryptography,โ€ in Proceedings of the thirty-seventh annual ACM symposium on Theory of computing (Baltimore, MD, USA: ACM, 2005), 84โ€“93, http://portal.acm.org/citation.cfm?id=1060590.1060603.
  4. โ†‘ 4.0 4.1 4.2 4.3 4.4 4.5 Chris Peikert, โ€œPublic-key cryptosystems from the worst-case shortest vector problem: extended abstract,โ€ in Proceedings of the 41st annual ACM symposium on Theory of computing (Bethesda, MD, USA: ACM, 2009), 333โ€“342, http://portal.acm.org/citation.cfm?id=1536414.1536461.
  5. โ†‘ Template:Cite book
  6. โ†‘ Chris Peikert and Brent Waters, โ€œLossy trapdoor functions and their applications,โ€ in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 187-196, http://portal.acm.org/citation.cfm?id=1374406.
  7. โ†‘ Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan, โ€œTrapdoors for hard lattices and new cryptographic constructions,โ€ in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 197-206, http://portal.acm.org/citation.cfm?id=1374407.
  8. โ†‘ Template:Cite journal
  9. โ†‘ Template:Cite journal
  10. โ†‘ Template:Cite journal
  11. โ†‘ Template:Cite news
