Additive noise differential privacy mechanisms



Additive noise differential privacy mechanisms are a class of techniques used to ensure differential privacy when releasing the results of computations on sensitive datasets. They work by adding carefully calibrated random noise, drawn from specific probability distributions, to the true output of a function. This added noise obscures the influence of any single individual's data, thereby protecting their privacy while still allowing for meaningful statistical analysis. Common distributions used for noise generation include the Laplace and Gaussian distributions. These mechanisms are particularly useful for functions that output real-valued numbers.

Sensitivity

Both mechanisms require that the sensitivity of the query function first be determined. The sensitivity is the maximum amount by which the result of the query can change when a single person's data is added to or removed from the dataset, taken over all possible individuals. For queries that count the number of people who meet a requirement, the sensitivity is 1.
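As an illustration, the sensitivity of a counting query can be checked empirically. The sketch below uses a hypothetical dataset of ages and a hypothetical query `count_over_30`:

```python
def count_over_30(dataset):
    """Counting query: how many records have age greater than 30."""
    return sum(1 for age in dataset if age > 30)

# Hypothetical dataset of ages.
data = [25, 34, 41, 29, 52]

# Removing any one person's record changes the count by at most 1,
# which is exactly the sensitivity of a counting query.
base = count_over_30(data)
max_change = max(abs(base - count_over_30(data[:i] + data[i + 1:]))
                 for i in range(len(data)))
print(max_change)  # → 1
```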

Formal Definition

Here is the formal definition of sensitivity.

Let 𝒟 be a collection of all datasets and f: 𝒟 → ℝ be a real-valued function. The sensitivity[1] of a function, denoted Δf, is defined by

Δf = max |f(x) − f(y)|,

where the maximum is over all pairs of datasets x and y in 𝒟 differing in at most one element. For functions with higher-dimensional outputs, the sensitivity is usually measured under the ℓ₁ or ℓ₂ norm.
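For a non-counting example, consider a sum query whose records are clipped to a bounded range; its sensitivity is then the clipping bound. A sketch with hypothetical data (the query name `clipped_sum` and the bound 100 are illustrative):

```python
def clipped_sum(dataset, bound=100.0):
    """Sum query with each record clipped to the range [0, bound]."""
    return sum(min(max(x, 0.0), bound) for x in dataset)

# Neighbouring datasets differ in at most one record. Each clipped record
# contributes at most `bound` to the sum, so the sensitivity Δf is 100.
data = [25, 34, 150, 29]        # 150 is clipped to 100
neighbour = data + [999]        # add one record (also clipped to 100)
diff = abs(clipped_sum(neighbour) - clipped_sum(data))
print(diff)  # → 100.0
```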

Throughout this article, a mechanism is a randomized algorithm that releases a sensitive function f under ϵ- (or (ϵ,δ)-) differential privacy.

Real-valued functions

A real-valued function is any function that returns a real value, that is, a positive or negative number that can be represented as a decimal fraction (e.g. 0.5 or 1.32).

The Laplace Mechanism

Introduced by Dwork et al.,[1] this mechanism adds noise drawn from a Laplace distribution:

Figure: the Laplace mechanism offering 0.5-differential privacy for a function with sensitivity 1.
Lap(x, f, ϵ) = f(x) + Lap(μ = 0, b = Δf/ϵ),

where μ is the expectation of the Laplace distribution and b is the scale parameter. Roughly speaking, small-scale noise suffices for a weak privacy constraint (corresponding to a large value of ϵ), while a greater level of noise provides a greater degree of uncertainty about the original input (corresponding to a small value of ϵ).
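The mechanism itself is a one-liner in code. The sketch below uses NumPy's Laplace sampler; the function name and parameters are illustrative:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale b = Δf/ε."""
    rng = np.random.default_rng() if rng is None else rng
    b = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=b)

# Example: a counting query with sensitivity 1, released under ε = 0.5,
# so the noise scale is b = 1/0.5 = 2.
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5, rng=rng)
```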

To argue that the mechanism satisfies ϵ-differential privacy, it suffices to show that the output distribution of Lap(x, f, ϵ) is close in a multiplicative sense to that of Lap(y, f, ϵ) everywhere:

Pr(Lap(x, f, ϵ) = z) / Pr(Lap(y, f, ϵ) = z)
= Pr(f(x) + Lap(0, Δf/ϵ) = z) / Pr(f(y) + Lap(0, Δf/ϵ) = z)
= Pr(Lap(0, Δf/ϵ) = z − f(x)) / Pr(Lap(0, Δf/ϵ) = z − f(y))
= [(1/2b) exp(−|z − f(x)|/b)] / [(1/2b) exp(−|z − f(y)|/b)]
= exp((|z − f(y)| − |z − f(x)|)/b)
≤ exp(|f(x) − f(y)|/b)
≤ exp(Δf/b) = exp(ϵ).

The first inequality follows from the triangle inequality and the second from the sensitivity bound. A similar argument gives a lower bound of exp(−ϵ).
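This multiplicative closeness can be checked numerically by evaluating the Laplace density at many points. The sketch below uses hypothetical query outputs f(x) = 10 and f(y) = 11 on neighbouring datasets, with Δf = 1:

```python
import math

def laplace_pdf(z, mu, b):
    """Density of the Laplace distribution with mean mu and scale b."""
    return math.exp(-abs(z - mu) / b) / (2 * b)

# Neighbouring datasets with f(x) = 10, f(y) = 11, sensitivity Δf = 1,
# and ε = 0.5, so the noise scale is b = Δf/ε = 2.
fx, fy, eps, sens = 10.0, 11.0, 0.5, 1.0
b = sens / eps

# At every output z, the ratio of densities stays within [exp(-ε), exp(ε)].
zs = [i / 10.0 for i in range(-100, 300)]
ratios = [laplace_pdf(z, fx, b) / laplace_pdf(z, fy, b) for z in zs]
print(max(ratios) <= math.exp(eps) + 1e-12)  # → True
```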

A discrete variant of the Laplace mechanism, called the geometric mechanism, is universally utility-maximizing.[2] This means that for any prior (such as auxiliary information or beliefs about data distributions) and any symmetric and monotone univariate loss function, the expected loss of any differentially private mechanism can be matched or improved by running the geometric mechanism followed by a data-independent post-processing transformation. The result also holds for minimax (risk-averse) consumers.[3] No such universal mechanism exists for multivariate loss functions.[4]
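A minimal sketch of the geometric mechanism, which adds integer noise distributed as P(noise = k) ∝ α^|k| with α = exp(−ϵ/Δf). The sampler relies on the fact that the difference of two i.i.d. geometric variables is two-sided geometric; the function name and parameters are illustrative:

```python
import math
import numpy as np

def geometric_mechanism(true_count, sensitivity, epsilon, rng=None):
    """Add two-sided geometric noise: P(noise = k) ∝ α^|k|, α = exp(-ε/Δf)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = math.exp(-epsilon / sensitivity)
    # NumPy's geometric distribution counts trials (support {1, 2, ...}),
    # so subtract 1 to count failures before the first success.
    g1 = rng.geometric(1 - alpha) - 1
    g2 = rng.geometric(1 - alpha) - 1
    return true_count + g1 - g2  # output stays an integer

rng = np.random.default_rng(0)
noisy = geometric_mechanism(42, sensitivity=1, epsilon=0.5, rng=rng)
```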

The Gaussian Mechanism

Analogous to the Laplace mechanism, the Gaussian mechanism adds noise drawn from a Gaussian distribution whose variance is calibrated according to the sensitivity and the privacy parameters. For any δ ∈ (0, 1) and ϵ ∈ (0, 1), the mechanism defined by:


Gauss(x, f, ϵ, δ) = f(x) + 𝒩(μ = 0, σ² = 2 ln(1.25/δ)·(Δf)²/ϵ²)

provides (ϵ,δ)-differential privacy.

Note that, unlike the Laplace mechanism, Gauss only satisfies (ϵ, δ)-differential privacy, and only for ϵ < 1. To prove this, it suffices to show that, with probability at least 1 − δ, the distribution of Gauss(x, f, ϵ, δ) is close to that of Gauss(y, f, ϵ, δ). See Appendix A in Dwork and Roth[5] for a proof of this result.
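A sketch of the Gaussian mechanism with the variance calibration above (names and parameters are illustrative; the ϵ < 1 requirement is enforced explicitly):

```python
import math
import numpy as np

def gaussian_mechanism(true_value, sensitivity, epsilon, delta, rng=None):
    """Release true_value plus Gaussian noise calibrated for (ε, δ)-DP."""
    if not 0 < epsilon < 1:
        raise ValueError("this calibration requires 0 < epsilon < 1")
    rng = np.random.default_rng() if rng is None else rng
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return true_value + rng.normal(loc=0.0, scale=sigma)

rng = np.random.default_rng(0)
noisy = gaussian_mechanism(42, sensitivity=1.0, epsilon=0.5, delta=1e-5,
                           rng=rng)
```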

High dimensional functions

For high-dimensional functions of the form f: 𝒟 → ℝᵈ, where d ≥ 2, the sensitivity of f is measured under the ℓ₁ or ℓ₂ norm. The equivalent Gaussian mechanism that satisfies (ϵ, δ)-differential privacy for such a function (still under the assumption that ϵ < 1) is

Gauss(x, f, ϵ, δ) = f(x) + 𝒩ᵈ(μ = 0, σ² = 2 ln(1.25/δ)·(Δ₂f)²/ϵ²),

where Δ₂f represents the sensitivity of f under the ℓ₂ norm and 𝒩ᵈ(0, σ²) represents a d-dimensional vector whose coordinates are independent noise samples drawn from 𝒩(0, σ²) (see Appendix A in Dwork and Roth[5] for a proof).
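The high-dimensional variant only changes the shape of the noise; each coordinate receives an independent 𝒩(0, σ²) sample. A sketch with illustrative names, applied to a hypothetical 3-bin histogram (whose ℓ₂ sensitivity under add/remove-one neighbouring is 1):

```python
import math
import numpy as np

def gaussian_mechanism_vec(true_vector, l2_sensitivity, epsilon, delta,
                           rng=None):
    """Add i.i.d. Gaussian noise to each coordinate of a d-dim output."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * l2_sensitivity / epsilon
    v = np.asarray(true_vector, dtype=float)
    return v + rng.normal(loc=0.0, scale=sigma, size=v.shape)

# Example: release a 3-bin histogram under (0.5, 1e-5)-DP.
rng = np.random.default_rng(0)
noisy_hist = gaussian_mechanism_vec([10, 5, 7], l2_sensitivity=1.0,
                                    epsilon=0.5, delta=1e-5, rng=rng)
```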

References

Template:Reflist