Characteristic function (probability theory)


Figure: the characteristic function of a uniform U(−1,1) random variable. This function is real-valued because it corresponds to a random variable that is symmetric around the origin; however, characteristic functions may in general be complex-valued.

In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.

In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases.

The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.

Introduction

The characteristic function is a way to describe a random variable X. The characteristic function,

φ_X(t) = E[e^{itX}],

a function of t, determines the behavior and properties of the probability distribution of X. It is equivalent to a probability density function or cumulative distribution function, since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.

If a random variable admits a density function, then the characteristic function is its Fourier dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function M_X(t), then the domain of the characteristic function can be extended to the complex plane, and

φ_X(−it) = M_X(t).

Note, however, that the characteristic function of a distribution is well defined for all real values of t, even when the moment-generating function is not well defined for all real values of t.

The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the Central Limit Theorem uses characteristic functions and Lévy's continuity theorem. Another important application is to the theory of the decomposability of random variables.

Definition

For a scalar random variable X the characteristic function is defined as the expected value of e^{itX}, where i is the imaginary unit, and t ∈ ℝ is the argument of the characteristic function:

\varphi_X \colon \mathbb{R} \to \mathbb{C}, \qquad \varphi_X(t) = \operatorname{E}\left[ e^{itX} \right] = \int_{\mathbb{R}} e^{itx}\,dF_X(x) = \int_{\mathbb{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{itQ_X(p)}\,dp

Here F_X is the cumulative distribution function of X, f_X is the corresponding probability density function, Q_X(p) is the corresponding inverse cumulative distribution function, also called the quantile function,[1] and the integrals are of the Riemann–Stieltjes kind. If a random variable X has a probability density function, then the characteristic function is its Fourier transform with sign reversal in the complex exponential.[2] This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform. For example, some authors define φ_X(t) = E[e^{−2πitX}], which is essentially a change of parameter. Other notation may be encountered in the literature: p̂ as the characteristic function for a probability measure p, or f̂ as the characteristic function corresponding to a density f.
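The equivalent integral forms above can be checked numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; the evaluation point is arbitrary) for an Exp(1) variable, whose characteristic function is known in closed form as 1/(1 − it):

```python
# Minimal sketch: two equivalent forms of the definition for X ~ Exp(1),
# whose characteristic function is 1/(1 - it).
import numpy as np
from scipy import integrate

t = 1.7  # an arbitrary evaluation point

# Form 1: integral of e^{itx} f(x) dx with density f(x) = e^{-x} on [0, inf)
re, _ = integrate.quad(lambda x: np.cos(t * x) * np.exp(-x), 0, np.inf)
im, _ = integrate.quad(lambda x: np.sin(t * x) * np.exp(-x), 0, np.inf)
phi_density = re + 1j * im

# Form 2 (Monte Carlo): E[e^{itX}], sampling X = Q(U) via the quantile
# function Q(p) = -log(1 - p) of the exponential distribution
rng = np.random.default_rng(0)
x = -np.log1p(-rng.random(200_000))
phi_mc = np.exp(1j * t * x).mean()

print(phi_density, phi_mc, 1 / (1 - 1j * t))  # all three should agree
```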

Generalizations

The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable X takes its values. For example, if X is a random vector taking values in ℝ^k, the argument t is also a vector in ℝ^k and φ_X(t) = E[e^{it^⊤X}]; for a k×p matrix-valued random variable X, the argument is a k×p matrix t and φ_X(t) = E[e^{i tr(t^⊤X)}].

Examples

For each distribution (with its standard parametrization), the characteristic function φ(t) is:

  • Degenerate δ_a: e^{ita}
  • Bernoulli Bern(p): 1 − p + pe^{it}
  • Binomial B(n, p): (1 − p + pe^{it})^n
  • Negative binomial NB(r, p): (p / (1 − e^{it} + pe^{it}))^r
  • Poisson Pois(λ): e^{λ(e^{it} − 1)}
  • Uniform (continuous) U(a, b): (e^{itb} − e^{ita}) / (it(b − a))
  • Uniform (discrete) DU(a, b): (e^{ita} − e^{it(b+1)}) / ((1 − e^{it})(b − a + 1))
  • Laplace L(μ, b): e^{itμ} / (1 + b²t²)
  • Logistic Logistic(μ, s): e^{iμt} πst / sinh(πst)
  • Normal N(μ, σ²): e^{itμ − σ²t²/2}
  • Chi-squared χ²_k: (1 − 2it)^{−k/2}
  • Noncentral chi-squared χ′²_k(λ): e^{iλt/(1 − 2it)} (1 − 2it)^{−k/2}
  • Generalized chi-squared χ̃(w, k, λ, s, m): exp[it(m + Σ_j w_jλ_j/(1 − 2iw_jt)) − s²t²/2] / Π_j (1 − 2iw_jt)^{k_j/2}
  • Cauchy C(μ, θ): e^{itμ − θ|t|}
  • Gamma Γ(k, θ): (1 − itθ)^{−k}
  • Exponential Exp(λ): (1 − itλ^{−1})^{−1}
  • Geometric Gf(p) (number of failures): p / (1 − e^{it}(1 − p))
  • Geometric Gt(p) (number of trials): pe^{it} / (1 − e^{it}(1 − p))
  • Multivariate normal N(μ, Σ): e^{it^⊤μ − ½ t^⊤Σt}
  • Multivariate Cauchy MultiCauchy(μ, Σ): e^{it^⊤μ − √(t^⊤Σt)}[3]

Oberhettinger (1973) provides extensive tables of characteristic functions.
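Entries of this table are easy to sanity-check by Monte Carlo, since the characteristic function is just the expectation of e^{itX}. A small sketch (assuming NumPy; parameters are arbitrary) for the Poisson and gamma rows:

```python
# Sketch: Monte Carlo check of two table entries.
# Poisson(lam): exp(lam (e^{it} - 1)); Gamma(k, theta): (1 - it theta)^{-k}.
import numpy as np

rng = np.random.default_rng(1)
t = 0.8
lam, k, theta = 3.0, 2.5, 1.5

pois = rng.poisson(lam, 500_000)
gam = rng.gamma(shape=k, scale=theta, size=500_000)

ecf_pois = np.exp(1j * t * pois).mean()  # empirical characteristic function
ecf_gam = np.exp(1j * t * gam).mean()

print(ecf_pois, np.exp(lam * (np.exp(1j * t) - 1)))   # should agree
print(ecf_gam, (1 - 1j * t * theta) ** (-k))          # should agree
```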

Properties

  • The characteristic function of a real-valued random variable always exists, since it is an integral of a bounded continuous function over a space whose measure is finite.
  • A characteristic function is uniformly continuous on the entire space.
  • It is non-vanishing in a region around zero: φ(0) = 1.
  • It is bounded: |φ(t)| ≤ 1.
  • It is Hermitian: φ(−t) = φ(t)*, where * denotes complex conjugation. In particular, the characteristic function of a symmetric (around the origin) random variable is real-valued and even.
  • There is a bijection between probability distributions and characteristic functions. That is, two random variables X₁, X₂ have the same probability distribution if and only if φ_{X₁} = φ_{X₂}.
  • If a random variable X has moments up to the k-th order, then the characteristic function φ_X is k times continuously differentiable on the entire real line. In this case E[X^k] = i^{−k} φ_X^{(k)}(0).
  • If a characteristic function φ_X has a k-th derivative at zero, then the random variable X has all moments up to k if k is even, but only up to k − 1 if k is odd; in that case φ_X^{(k)}(0) = i^k E[X^k].
  • If X₁, ..., X_n are independent random variables, and a₁, ..., a_n are constants, then the characteristic function of the linear combination of the X_i is φ_{a₁X₁+⋯+a_nX_n}(t) = φ_{X₁}(a₁t)⋯φ_{X_n}(a_nt). One specific case is the sum of two independent random variables X₁ and X₂, in which case φ_{X₁+X₂}(t) = φ_{X₁}(t)·φ_{X₂}(t).
  • Let X and Y be two random variables with characteristic functions φ_X and φ_Y. X and Y are independent if and only if φ_{X,Y}(s,t) = φ_X(s)φ_Y(t) for all (s,t) ∈ ℝ².
  • The tail behavior of the characteristic function determines the smoothness of the corresponding density function.
  • Let Y = aX + b be a linear transformation of a random variable X. The characteristic function of Y is φ_Y(t) = e^{itb}φ_X(at). For random vectors X and Y = AX + B (where A is a constant matrix and B a constant vector), we have φ_Y(t) = e^{it^⊤B}φ_X(A^⊤t).[4] A numerical sketch of this transformation rule and of the product rule for independent sums follows this list.
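The following is a minimal sketch (assuming NumPy; the distributions and constants are arbitrary illustrative choices) of the linear-transformation rule and the product rule for independent sums:

```python
# Sketch: phi_{aX+b}(t) = e^{itb} phi_X(at), and for independent X, Y:
# phi_{X+Y}(t) = phi_X(t) phi_Y(t), checked with empirical characteristic functions.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x = rng.standard_normal(n)   # X ~ N(0, 1)
y = rng.exponential(1.0, n)  # Y ~ Exp(1), independent of X
a, b, t = 2.0, -1.0, 0.6

# empirical characteristic function evaluated at s from a sample z
phi = lambda s, z: np.exp(1j * s * z).mean()

# Linear transformation property
print(phi(t, a * x + b), np.exp(1j * t * b) * phi(a * t, x))

# Product rule for an independent sum
print(phi(t, x + y), phi(t, x) * phi(t, y))
```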

Continuity

The bijection stated above between probability distributions and characteristic functions is sequentially continuous. That is, whenever a sequence of distribution functions F_n(x) converges (weakly) to some distribution F(x), the corresponding sequence of characteristic functions φ_n(t) will also converge, and the limit φ(t) will correspond to the characteristic function of the law F. More formally, this is stated as

Lévy's continuity theorem: A sequence {X_j} of n-variate random variables converges in distribution to a random variable X if and only if the sequence {φ_{X_j}} converges pointwise to a function φ which is continuous at the origin, in which case φ is the characteristic function of X.

This theorem can be used to prove the law of large numbers and the central limit theorem.

Inversion formula

There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of the characteristic function allows us to compute φ when we know the distribution function F (or density f). If, on the other hand, we know the characteristic function φ and want to find the corresponding distribution function, then one of the following inversion theorems can be used.

Theorem. If the characteristic function φ_X of a random variable X is integrable, then F_X is absolutely continuous, and therefore X has a probability density function. In the univariate case (i.e. when X is scalar-valued) the density function is given by

f_X(x) = F_X'(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx} \varphi_X(t)\,dt.

In the multivariate case it is

f_X(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i(t \cdot x)} \varphi_X(t)\, \lambda(dt)

where t·x is the dot product.

The density function is the Radon–Nikodym derivative of the distribution μ_X with respect to the Lebesgue measure λ:

f_X(x) = \frac{d\mu_X}{d\lambda}(x).
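The univariate formula can be evaluated numerically. A sketch (assuming SciPy; the evaluation point is arbitrary) recovering the standard normal density from its characteristic function e^{−t²/2}:

```python
# Sketch: density recovery via f(x) = (1/2pi) * integral of e^{-itx} phi(t) dt.
# For N(0,1), phi is real and even, so only the cosine part contributes.
import numpy as np
from scipy import integrate

phi = lambda t: np.exp(-t**2 / 2)  # characteristic function of N(0,1)
x = 0.7

f, _ = integrate.quad(lambda t: np.cos(t * x) * phi(t), -np.inf, np.inf)
f /= 2 * np.pi

print(f, np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))  # should match the N(0,1) pdf
```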

Theorem (Lévy). If φ_X is the characteristic function of distribution function F_X, and two points a < b are such that {x | a < x < b} is a continuity set of μ_X (in the univariate case this condition is equivalent to continuity of F_X at the points a and b), then

  • If X is scalar:

F_X(b) - F_X(a) = \frac{1}{2\pi} \lim_{T\to\infty} \int_{-T}^{+T} \frac{e^{-ita} - e^{-itb}}{it} \varphi_X(t)\,dt.

This formula can be re-stated in a form more convenient for numerical computation as

\frac{F(x+h) - F(x-h)}{2h} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\sin ht}{ht}\, e^{-itx} \varphi_X(t)\,dt.

For a random variable bounded from below one can obtain F(b) by taking a such that F(a) = 0. Otherwise, if a random variable is not bounded from below, the limit for a → −∞ gives F(b), but is numerically impractical.

  • If X is a vector random variable:

\mu_X\big(\{a < x < b\}\big) = \frac{1}{(2\pi)^n} \lim_{T_1\to\infty} \cdots \lim_{T_n\to\infty} \int_{-T_1 \le t_1 \le T_1} \cdots \int_{-T_n \le t_n \le T_n} \prod_{k=1}^n \left( \frac{e^{-it_k a_k} - e^{-it_k b_k}}{it_k} \right) \varphi_X(t)\, \lambda(dt_1 \times \cdots \times dt_n)

Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of F_X), then

  • If X is scalar:

F_X(a) - F_X(a-0) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{+T} e^{-ita} \varphi_X(t)\,dt

  • If X is a vector random variable:

\mu_X(\{a\}) = \lim_{T_1\to\infty} \cdots \lim_{T_n\to\infty} \left( \prod_{k=1}^n \frac{1}{2T_k} \right) \int_{[-T_1,T_1] \times \cdots \times [-T_n,T_n]} e^{-i(t \cdot a)} \varphi_X(t)\, \lambda(dt)

Theorem (Gil-Pelaez). For a univariate random variable X, if x is a continuity point of F_X, then

F_X(x) = \frac{1}{2} - \frac{1}{\pi} \int_0^\infty \frac{\operatorname{Im}\left[ e^{-itx} \varphi_X(t) \right]}{t}\,dt

where the imaginary part of a complex number z is given by Im(z) = (z − z*)/2i.

And its density function is

f_X(x) = \frac{1}{\pi} \int_0^\infty \operatorname{Re}\left[ e^{-itx} \varphi_X(t) \right] dt

The integral may fail to be Lebesgue-integrable; for example, when X is the discrete random variable that is always 0, it becomes the Dirichlet integral.
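A numerical sketch of the Gil-Pelaez formula (assuming SciPy; evaluation point arbitrary), recovering the standard normal CDF:

```python
# Sketch: Gil-Pelaez inversion for N(0,1), phi(t) = exp(-t^2/2).
# Since phi is real here, Im[e^{-itx} phi(t)] = -sin(tx) phi(t).
import numpy as np
from scipy import integrate, stats

phi = lambda t: np.exp(-t**2 / 2)
x = 0.5

integrand = lambda t: -np.sin(t * x) * phi(t) / t
val, _ = integrate.quad(integrand, 0, np.inf)
F = 0.5 - val / np.pi

print(F, stats.norm.cdf(x))  # should agree
```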

Inversion formulas for multivariate distributions are available.

Criteria for characteristic functions

The set of all characteristic functions is closed under certain operations:

  • A convex linear combination Σ_n a_n φ_n(t) (with a_n ≥ 0, Σ_n a_n = 1) of a finite or a countable number of characteristic functions is also a characteristic function.
  • The product of a finite number of characteristic functions is also a characteristic function. The same holds for an infinite product provided that it converges to a function continuous at the origin.
  • If φ is a characteristic function and α is a real number, then φ̄, Re(φ), |φ|², and φ(αt) are also characteristic functions.

It is well known that any non-decreasing càdlàg function F with limits F(−∞) = 0, F(+∞) = 1 corresponds to a cumulative distribution function of some random variable. There is also interest in finding similar simple criteria for when a given function φ could be the characteristic function of some random variable. The central result here is Bochner's theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine's, Mathias's, or Cramér's, although their application is just as difficult. Pólya's theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.

Bochner's theorem. An arbitrary function φ : ℝⁿ → ℂ is the characteristic function of some random variable if and only if φ is positive definite, continuous at the origin, and if φ(0) = 1.

Khinchine's criterion. A complex-valued, absolutely continuous function φ, with φ(0) = 1, is a characteristic function if and only if it admits the representation

\varphi(t) = \int_{\mathbb{R}} g(t+\theta)\, \overline{g(\theta)}\, d\theta.

Mathias' theorem. A real-valued, even, continuous, absolutely integrable function φ, with φ(0) = 1, is a characteristic function if and only if

(-1)^n \left( \int_{\mathbb{R}} \varphi(pt)\, e^{-t^2/2}\, H_{2n}(t)\,dt \right) \ge 0

for n = 0, 1, 2, …, and all p > 0. Here H_{2n} denotes the Hermite polynomial of degree 2n.

Pólya’s theorem can be used to construct an example of two random variables whose characteristic functions coincide over a finite interval but are different elsewhere.

Pólya's theorem. If φ is a real-valued, even, continuous function which satisfies the conditions

  • φ(0) = 1,
  • φ is convex for t > 0,
  • φ(∞) = 0,

then φ(t) is the characteristic function of an absolutely continuous distribution symmetric about 0.
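For example, the triangular function max(1 − |t|, 0) is real, even, continuous, convex for t > 0, and vanishes at infinity, so by Pólya's theorem it is a characteristic function; inverting it recovers the density (1 − cos x)/(πx²). A sketch (assuming SciPy; the evaluation point is arbitrary):

```python
# Sketch: the Polya-type characteristic function max(1-|t|, 0), numerically
# inverted via f(x) = (1/2pi) * integral of e^{-itx} phi(t) dt on [-1, 1].
import numpy as np
from scipy import integrate

phi = lambda t: np.maximum(1 - np.abs(t), 0.0)
x = 2.0

f, _ = integrate.quad(lambda t: np.cos(t * x) * phi(t), -1, 1)
f /= 2 * np.pi

print(f, (1 - np.cos(x)) / (np.pi * x**2))  # should match
```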

Uses

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.

Basic manipulations of distributions

Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if X₁, X₂, ..., X_n is a sequence of independent (and not necessarily identically distributed) random variables, and

S_n = \sum_{i=1}^n a_i X_i,

where the a_i are constants, then the characteristic function for S_n is given by

\varphi_{S_n}(t) = \varphi_{X_1}(a_1 t)\, \varphi_{X_2}(a_2 t) \cdots \varphi_{X_n}(a_n t)

In particular, φ_{X+Y}(t) = φ_X(t)φ_Y(t). To see this, write out the definition of characteristic function:

\varphi_{X+Y}(t) = \operatorname{E}\left[ e^{it(X+Y)} \right] = \operatorname{E}\left[ e^{itX} e^{itY} \right] = \operatorname{E}\left[ e^{itX} \right] \operatorname{E}\left[ e^{itY} \right] = \varphi_X(t) \varphi_Y(t)

The independence of X and Y is required to establish the equality of the third and fourth expressions.

Another special case of interest for identically distributed random variables is when a_i = 1/n and then S_n is the sample mean. In this case, writing X̄ for the mean,

\varphi_{\bar X}(t) = \left[ \varphi_X\!\left( \frac{t}{n} \right) \right]^n
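The sample-mean identity can be checked by simulation. A sketch (assuming NumPy; parameters arbitrary) for the mean of n i.i.d. Exp(1) variables, where φ_X(t) = (1 − it)^{−1}:

```python
# Sketch: for the mean of n i.i.d. Exp(1) variables,
# phi_{Xbar}(t) = [phi_X(t/n)]^n = (1 - it/n)^{-n}.
import numpy as np

rng = np.random.default_rng(3)
n, reps, t = 5, 400_000, 1.2
xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

ecf = np.exp(1j * t * xbar).mean()      # empirical characteristic function
closed_form = (1 - 1j * t / n) ** (-n)
print(ecf, closed_form)                 # should agree up to Monte Carlo error
```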

Moments

Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times:

\operatorname{E}\left[ X^n \right] = i^{-n} \left[ \frac{d^n}{dt^n} \varphi_X(t) \right]_{t=0} = i^{-n} \varphi_X^{(n)}(0)

This can be formally written using the derivatives of the Dirac delta function:

f_X(x) = \sum_{n=0}^\infty \frac{(-1)^n}{n!} \delta^{(n)}(x) \operatorname{E}\left[ X^n \right]

which allows a formal solution to the moment problem. For example, suppose X has a standard Cauchy distribution. Then φ_X(t) = e^{−|t|}. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean X̄ of n independent observations has characteristic function φ_{X̄}(t) = (e^{−|t|/n})^n = e^{−|t|}, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.

As a further example, suppose X follows a Gaussian distribution, i.e. X ~ N(μ, σ²). Then φ_X(t) = e^{iμt − σ²t²/2} and

\operatorname{E}[X] = i^{-1} \left[ \frac{d}{dt} \varphi_X(t) \right]_{t=0} = i^{-1} \left[ (i\mu - \sigma^2 t)\, \varphi_X(t) \right]_{t=0} = \mu

A similar calculation shows E[X²] = μ² + σ², and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate E[X²].
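These derivative formulas also suggest a numerical check: central finite differences of φ_X at zero recover the first two moments. A sketch (plain NumPy; μ, σ, and the step size are illustrative choices):

```python
# Sketch: E[X^n] = i^{-n} phi_X^{(n)}(0) approximated by finite differences
# for X ~ N(mu, sigma^2).
import numpy as np

mu, sigma = 1.5, 2.0
phi = lambda t: np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)

h = 1e-4
d1 = (phi(h) - phi(-h)) / (2 * h)            # approximates phi'(0)
d2 = (phi(h) - 2 * phi(0) + phi(-h)) / h**2  # approximates phi''(0)

print((d1 / 1j).real, mu)                    # E[X]  = phi'(0) / i
print((d2 / 1j**2).real, mu**2 + sigma**2)   # E[X^2] = phi''(0) / i^2
```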

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.

Data analysis

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distribution, since closed-form expressions for the density are not available, which makes implementation of maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function, calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020) and Li et al. (2020) for training generative adversarial networks.
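A deliberately rough sketch of the idea (not the cited procedures themselves; the evaluation grid and least-squares loss are illustrative choices) fits a Cauchy sample, a stable law with α = 1, by matching the empirical characteristic function to the model e^{iμt − θ|t|}:

```python
# Sketch: least-squares matching of the empirical characteristic function
# of a Cauchy(mu, theta) sample to exp(i mu t - theta |t|).
import numpy as np
from scipy import optimize

rng = np.random.default_rng(4)
data = 2.0 + 0.5 * rng.standard_cauchy(5_000)  # true mu = 2.0, theta = 0.5

ts = np.linspace(0.1, 2.0, 20)                 # illustrative evaluation grid
ecf = np.exp(1j * np.outer(ts, data)).mean(axis=1)

def loss(params):
    mu, theta = params
    model = np.exp(1j * mu * ts - theta * np.abs(ts))
    return np.sum(np.abs(ecf - model) ** 2)

res = optimize.minimize(loss, x0=[0.0, 1.0], bounds=[(None, None), (1e-6, None)])
print(res.x)  # should land near (2.0, 0.5)
```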

Example

The gamma distribution with scale parameter θ and shape parameter k has the characteristic function

(1 - \theta i t)^{-k}.

Now suppose that we have

X \sim \Gamma(k_1, \theta) \quad\text{and}\quad Y \sim \Gamma(k_2, \theta)

with X and Y independent of each other, and we wish to know what the distribution of X + Y is. The characteristic functions are

\varphi_X(t) = (1 - \theta i t)^{-k_1}, \qquad \varphi_Y(t) = (1 - \theta i t)^{-k_2}

which by independence and the basic properties of characteristic functions leads to

\varphi_{X+Y}(t) = \varphi_X(t) \varphi_Y(t) = (1 - \theta i t)^{-k_1} (1 - \theta i t)^{-k_2} = (1 - \theta i t)^{-(k_1 + k_2)}.

This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k₁ + k₂, and we therefore conclude

X + Y \sim \Gamma(k_1 + k_2, \theta)

The result can be extended to n independent gamma-distributed random variables with the same scale parameter:

\forall i \in \{1, \ldots, n\} : X_i \sim \Gamma(k_i, \theta) \implies \sum_{i=1}^n X_i \sim \Gamma\left( \sum_{i=1}^n k_i,\, \theta \right).
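This convolution result is straightforward to verify by simulation. A sketch (assuming NumPy; parameters arbitrary) comparing the empirical characteristic function of X + Y with the closed form:

```python
# Sketch: Monte Carlo check that the ECF of X + Y matches (1 - it theta)^{-(k1+k2)}.
import numpy as np

rng = np.random.default_rng(5)
k1, k2, theta, t = 2.0, 3.5, 1.2, 0.7

x = rng.gamma(shape=k1, scale=theta, size=400_000)
y = rng.gamma(shape=k2, scale=theta, size=400_000)

ecf = np.exp(1j * t * (x + y)).mean()
print(ecf, (1 - 1j * t * theta) ** (-(k1 + k2)))  # should agree
```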

Entire characteristic functions

As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.

Related concepts

Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions).

\varphi_X(t) = \operatorname{E}\left[ e^{itX} \right] = \int_{\mathbb{R}} e^{itx} p(x)\,dx = \overline{ \int_{\mathbb{R}} e^{-itx} p(x)\,dx } = \overline{P(t)},

where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from φ_X(t) through the inverse Fourier transform:

p(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{itx} P(t)\,dt = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx} \varphi_X(t)\,dt.

Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
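The conjugation relation above can be verified directly. A sketch (assuming SciPy) for the standard normal, where both φ_X and P happen to be real:

```python
# Sketch: for X ~ N(0,1), the Fourier transform P(t) = integral of
# e^{-itx} p(x) dx satisfies conj(P(t)) = phi_X(t) = exp(-t^2/2).
import numpy as np
from scipy import integrate

p = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # N(0,1) density
t = 1.3

re, _ = integrate.quad(lambda x: np.cos(t * x) * p(x), -np.inf, np.inf)
im, _ = integrate.quad(lambda x: -np.sin(t * x) * p(x), -np.inf, np.inf)
P = re + 1j * im                        # Fourier transform of the density

print(np.conj(P), np.exp(-t**2 / 2))    # conj(P(t)) equals phi_X(t)
```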

Another related concept is the representation of probability distributions as elements of a reproducing kernel Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of the kernel function.

See also

  • Subindependence, a weaker condition than independence that is defined in terms of characteristic functions.
  • Cumulant, a coefficient in the power-series expansion of the cumulant generating function, which is the logarithm of the characteristic function.


References


  1. Template:Cite arXiv
  2. Template:Harvp
  3. Template:Harvp, using 1 as the number of degrees of freedom to recover the Cauchy distribution
  4. Template:Cite web