Q-function


A plot of the Q-function.

In statistics, the Q-function is the tail distribution function of the standard normal distribution.[1][2] In other words, Q(x) is the probability that a normal (Gaussian) random variable obtains a value more than x standard deviations above the mean. Equivalently, Q(x) is the probability that a standard normal random variable takes a value larger than x.

If $Y$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$, then $X = \frac{Y-\mu}{\sigma}$ is standard normal and

$$P(Y > y) = P(X > x) = Q(x),$$

where $x = \frac{y-\mu}{\sigma}$.

Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.[3]

Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.

Definition and basic properties

Formally, the Q-function is defined as

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty \exp\left(-\frac{u^2}{2}\right) du.$$

Thus,

$$Q(x) = 1 - Q(-x) = 1 - \Phi(x),$$

where $\Phi(x)$ is the cumulative distribution function of the standard normal distribution.

The Q-function can be expressed in terms of the error function, or the complementary error function, as[2]

$$Q(x) = \frac{1}{2}\left(\frac{2}{\sqrt{\pi}} \int_{x/\sqrt{2}}^\infty \exp\left(-t^2\right) dt\right) = \frac{1}{2} - \frac{1}{2}\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2}\operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right).$$
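As a quick numerical check, the defining integral, the erfc form, and the survival function of the standard normal distribution all agree; the following is a minimal sketch in Python, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import integrate, special, stats

def q_integral(x):
    """Q(x) by direct numerical integration of the defining integral."""
    val, _ = integrate.quad(lambda u: np.exp(-u**2 / 2), x, np.inf)
    return val / np.sqrt(2 * np.pi)

def q_erfc(x):
    """Q(x) = (1/2) erfc(x / sqrt(2))."""
    return 0.5 * special.erfc(x / np.sqrt(2))

x = 1.5
print(q_integral(x))     # ~0.066807
print(q_erfc(x))         # ~0.066807
print(stats.norm.sf(x))  # survival function of the standard normal, same value
```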

An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as:[4]

$$Q(x) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta}\right) d\theta.$$

This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.

Craig's formula was later extended by Behnad (2020)[5] for the Q-function of the sum of two non-negative variables, as follows:

The Q-function plotted in the complex plane.
$$Q(x+y) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta} - \frac{y^2}{2\cos^2\theta}\right) d\theta, \qquad x, y \geq 0.$$
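Both identities are easy to verify numerically. The sketch below (again assuming NumPy and SciPy; the test values are arbitrary) compares Craig's formula and the Behnad extension with the erfc-based expression for Q(x):

```python
import numpy as np
from scipy import integrate, special

def q_exact(x):
    # Q(x) via the complementary error function
    return 0.5 * special.erfc(x / np.sqrt(2))

def q_craig(x):
    # Craig's formula: fixed, finite integration range [0, pi/2]
    val, _ = integrate.quad(
        lambda t: np.exp(-x**2 / (2 * np.sin(t)**2)), 0, np.pi / 2)
    return val / np.pi

def q_behnad(x, y):
    # Behnad's extension for Q(x + y), with x, y >= 0
    val, _ = integrate.quad(
        lambda t: np.exp(-x**2 / (2 * np.sin(t)**2)
                         - y**2 / (2 * np.cos(t)**2)), 0, np.pi / 2)
    return val / np.pi

print(q_craig(1.0), q_exact(1.0))        # both ~0.158655
print(q_behnad(0.6, 0.4), q_exact(1.0))  # both ~0.158655
```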

Bounds and approximations

  • The Q-function satisfies the bounds
$$\frac{x}{1+x^2}\,\phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x > 0,$$
where $\phi(x)$ is the density function of the standard normal distribution, and the bounds become increasingly tight for large x.
Using the substitution $v = u^2/2$, the upper bound is derived as follows:
$$Q(x) = \int_x^\infty \phi(u)\,du < \int_x^\infty \frac{u}{x}\,\phi(u)\,du = \int_{x^2/2}^\infty \frac{e^{-v}}{x\sqrt{2\pi}}\,dv = \left.-\frac{e^{-v}}{x\sqrt{2\pi}}\right|_{x^2/2}^\infty = \frac{\phi(x)}{x}.$$
Similarly, using $\phi'(u) = -u\,\phi(u)$ and the quotient rule,
$$\left(1+\frac{1}{x^2}\right) Q(x) = \int_x^\infty \left(1+\frac{1}{x^2}\right) \phi(u)\,du > \int_x^\infty \left(1+\frac{1}{u^2}\right) \phi(u)\,du = \left.-\frac{\phi(u)}{u}\right|_x^\infty = \frac{\phi(x)}{x}.$$
Solving for Q(x) provides the lower bound.
The geometric mean of the upper and lower bound gives a suitable approximation for Q(x):
$$Q(x) \approx \frac{\phi(x)}{\sqrt{1+x^2}}, \qquad x \geq 0.$$
  • Tighter bounds and approximations of Q(x) can also be obtained by optimizing the following expression [7]
$$\tilde{Q}(x) = \frac{\phi(x)}{(1-a)x + a\sqrt{x^2+b}}.$$
For $x \geq 0$, the best upper bound is given by $a=0.344$ and $b=5.334$ with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by $a=0.339$ and $b=5.510$ with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by $a=1/\pi$ and $b=2\pi$ with maximum absolute relative error of 1.17%.
  • The Chernoff bound of the Q-function is
$$Q(x) \leq e^{-\frac{x^2}{2}}, \qquad x > 0.$$
  • Improved exponential bounds and a pure exponential approximation are [8]
$$Q(x) \leq \tfrac{1}{4} e^{-x^2} + \tfrac{1}{4} e^{-\frac{x^2}{2}} \leq \tfrac{1}{2} e^{-\frac{x^2}{2}}, \qquad x > 0$$
$$Q(x) \approx \tfrac{1}{12} e^{-\frac{x^2}{2}} + \tfrac{1}{4} e^{-\frac{2}{3}x^2}, \qquad x > 0$$
  • The above were generalized by Tanash & Riihonen (2020),[9] who showed that Q(x) can be accurately approximated or bounded by
$$\tilde{Q}(x) = \sum_{n=1}^{N} a_n e^{-b_n x^2}.$$
In particular, they presented a systematic methodology to solve for the numerical coefficients $\{(a_n,b_n)\}_{n=1}^{N}$ that yield a minimax approximation or bound: $Q(x) \approx \tilde{Q}(x)$, $Q(x) \leq \tilde{Q}(x)$, or $Q(x) \geq \tilde{Q}(x)$ for $x \geq 0$. With the example coefficients tabulated in the paper for $N = 20$, the relative and absolute approximation errors are less than $2.831 \cdot 10^{-6}$ and $1.416 \cdot 10^{-6}$, respectively. The coefficients $\{(a_n,b_n)\}_{n=1}^{N}$ for many variations of the exponential approximations and bounds up to $N = 25$ have been released to open access as a comprehensive dataset.[10]
  • Another approximation of $Q(x)$ for $x \in [0,\infty)$ is given by Karagiannidis & Lioumpas (2007),[11] who showed for the appropriate choice of parameters $\{A,B\}$ that
$$f(x; A, B) = \frac{\left(1 - e^{-Ax}\right) e^{-x^2}}{B\sqrt{\pi}\,x} \approx \operatorname{erfc}(x).$$
The absolute error between $f(x;A,B)$ and $\operatorname{erfc}(x)$ over the range $[0,R]$ is minimized by evaluating
$$\{A,B\} = \underset{\{A,B\}}{\operatorname{arg\,min}} \; \frac{1}{R} \int_0^R \left| f(x;A,B) - \operatorname{erfc}(x) \right| dx.$$
Using $R = 20$ and numerically integrating, they found the minimum error occurred when $\{A,B\} = \{1.98, 1.135\}$, which gave a good approximation for all $x \geq 0$.
Substituting these values and using the relationship between $Q(x)$ and $\operatorname{erfc}(x)$ from above gives
$$Q(x) \approx \frac{\left(1 - e^{-\frac{1.98x}{\sqrt{2}}}\right) e^{-\frac{x^2}{2}}}{1.135\sqrt{2\pi}\,x}, \qquad x \geq 0.$$
Alternative coefficients are also available for the above 'Karagiannidis–Lioumpas approximation' for tailoring accuracy for a specific application or transforming it into a tight bound.[12]
  • A tighter and more tractable approximation of $Q(x)$ for positive arguments $x \in [0,\infty)$ is given by López-Benítez & Casadevall (2011)[13] based on a second-order exponential function:
$$Q(x) \approx e^{-ax^2 - bx - c}, \qquad x \geq 0.$$
The fitting coefficients $(a,b,c)$ can be optimized over any desired range of arguments in order to minimize the sum of square errors ($a=0.3842$, $b=0.7640$, $c=0.6964$ for $x \in [0,20]$) or to minimize the maximum absolute error ($a=0.4920$, $b=0.2887$, $c=1.1893$ for $x \in [0,20]$). This approximation offers some benefits such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of $Q(x)$ is trivial and does not alter the algebraic form of the approximation). Several of the approximations in this list are compared numerically in the sketches after the list.
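As an illustration, the following sketch (assuming NumPy and SciPy, and using the coefficient values quoted above) compares a few of the approximations in this list with the exact value $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$:

```python
import numpy as np
from scipy import special

def q_exact(x):
    return 0.5 * special.erfc(x / np.sqrt(2))

def phi(x):
    # standard normal density
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def q_geometric_mean(x):
    # geometric mean of the elementary upper and lower bounds
    return phi(x) / np.sqrt(1 + x**2)

def q_optimized(x, a=0.339, b=5.510):
    # Q~(x) = phi(x) / ((1-a)x + a*sqrt(x^2+b)), "best approximation" coefficients
    return phi(x) / ((1 - a) * x + a * np.sqrt(x**2 + b))

def q_karagiannidis_lioumpas(x):
    # Karagiannidis-Lioumpas approximation with A = 1.98, B = 1.135
    return ((1 - np.exp(-1.98 * x / np.sqrt(2))) * np.exp(-x**2 / 2)
            / (1.135 * np.sqrt(2 * np.pi) * x))

def q_lopez_benitez(x, a=0.3842, b=0.7640, c=0.6964):
    # second-order exponential approximation (least-squares coefficients for x in [0, 20])
    return np.exp(-a * x**2 - b * x - c)

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, q_exact(x), q_geometric_mean(x), q_optimized(x),
          q_karagiannidis_lioumpas(x), q_lopez_benitez(x))
```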
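The sum-of-exponentials form of Tanash & Riihonen can likewise be fitted numerically. The sketch below is a simple least-squares fit with a small $N$ and an arbitrary starting guess, not the minimax procedure of the paper, whose tabulated coefficients are not reproduced here:

```python
import numpy as np
from scipy import special, optimize

def q_exact(x):
    return 0.5 * special.erfc(x / np.sqrt(2))

def q_sum_exp(x, params):
    # params = [a_1, b_1, a_2, b_2, ...]; Q~(x) = sum_n a_n * exp(-b_n * x^2)
    a = params[0::2]
    b = params[1::2]
    return sum(an * np.exp(-bn * x**2) for an, bn in zip(a, b))

# illustrative least-squares fit with N = 3 terms on a grid of x values
x_grid = np.linspace(0.0, 6.0, 200)
target = q_exact(x_grid)
init = np.array([0.25, 0.5, 0.15, 1.0, 0.1, 2.0])  # rough starting guess

def residual(params):
    return q_sum_exp(x_grid, params) - target

fit = optimize.least_squares(residual, init)
print(np.max(np.abs(residual(fit.x))))  # maximum absolute error of the fitted Q~(x)
```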

Inverse Q

The inverse Q-function can be related to the inverse error functions:

$$Q^{-1}(y) = \sqrt{2}\;\operatorname{erf}^{-1}(1-2y) = \sqrt{2}\;\operatorname{erfc}^{-1}(2y)$$

The function Q1(y) finds application in digital communications. It is usually expressed in dB and generally called Q-factor:

$$Q\text{-factor} = 20 \log_{10}\!\left(Q^{-1}(y)\right)\ \text{dB}$$

where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal to noise ratio that yields a bit error rate equal to y.
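A short sketch of the inverse Q-function and the corresponding Q-factor, assuming SciPy (erfcinv is the inverse complementary error function):

```python
import numpy as np
from scipy import special

def q_inv(y):
    # Q^{-1}(y) = sqrt(2) * erfcinv(2y)
    return np.sqrt(2) * special.erfcinv(2 * y)

def q_factor_db(ber):
    # Q-factor in dB for a given bit-error rate
    return 20 * np.log10(q_inv(ber))

print(q_inv(0.001349898))  # ~3.0, consistent with Q(3.0) in the table below
print(q_factor_db(1e-9))   # ~15.6 dB
```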

Q-factor vs. bit error rate (BER).

Values

The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R, as well as in Python, MATLAB and Mathematica. Some values of the Q-function are given below for reference.


Q(0.0) 0.500000000 1/2.0000
Q(0.1) 0.460172163 1/2.1731
Q(0.2) 0.420740291 1/2.3768
Q(0.3) 0.382088578 1/2.6172
Q(0.4) 0.344578258 1/2.9021
Q(0.5) 0.308537539 1/3.2411
Q(0.6) 0.274253118 1/3.6463
Q(0.7) 0.241963652 1/4.1329
Q(0.8) 0.211855399 1/4.7202
Q(0.9) 0.184060125 1/5.4330


Q(1.0) 0.158655254 1/6.3030
Q(1.1) 0.135666061 1/7.3710
Q(1.2) 0.115069670 1/8.6904
Q(1.3) 0.096800485 1/10.3305
Q(1.4) 0.080756659 1/12.3829
Q(1.5) 0.066807201 1/14.9684
Q(1.6) 0.054799292 1/18.2484
Q(1.7) 0.044565463 1/22.4389
Q(1.8) 0.035930319 1/27.8316
Q(1.9) 0.028716560 1/34.8231


Q(2.0) 0.022750132 1/43.9558
Q(2.1) 0.017864421 1/55.9772
Q(2.2) 0.013903448 1/71.9246
Q(2.3) 0.010724110 1/93.2478
Q(2.4) 0.008197536 1/121.9879
Q(2.5) 0.006209665 1/161.0393
Q(2.6) 0.004661188 1/214.5376
Q(2.7) 0.003466974 1/288.4360
Q(2.8) 0.002555130 1/391.3695
Q(2.9) 0.001865813 1/535.9593


Q(3.0) 0.001349898 1/740.7967
Q(3.1) 0.000967603 1/1033.4815
Q(3.2) 0.000687138 1/1455.3119
Q(3.3) 0.000483424 1/2068.5769
Q(3.4) 0.000336929 1/2967.9820
Q(3.5) 0.000232629 1/4298.6887
Q(3.6) 0.000159109 1/6285.0158
Q(3.7) 0.000107800 1/9276.4608
Q(3.8) 0.000072348 1/13822.0738
Q(3.9) 0.000048096 1/20791.6011
Q(4.0) 0.000031671 1/31574.3855
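The table can be reproduced with a few lines of Python, assuming SciPy:

```python
import numpy as np
from scipy import stats

# reproduce the tabulated values Q(x) and 1/Q(x) for x = 0.0, 0.1, ..., 4.0
for x in np.arange(0.0, 4.05, 0.1):
    q = stats.norm.sf(x)  # Q(x) as the survival function of the standard normal
    print(f"Q({x:.1f}) {q:.9f} 1/{1 / q:.4f}")
```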


Generalization to high dimensions

The Q-function can be generalized to higher dimensions:[14]

$$Q(\mathbf{x}) = P(\mathbf{X} \geq \mathbf{x}),$$

where $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, \Sigma)$ follows the multivariate normal distribution with covariance $\Sigma$ and the threshold is of the form $\mathbf{x} = \gamma \Sigma \mathbf{l}^*$ for some positive vector $\mathbf{l}^* > \mathbf{0}$ and positive constant $\gamma > 0$. As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be approximated arbitrarily well as $\gamma$ becomes larger and larger.[15][16]
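Although there is no closed form, the probability can be estimated by Monte Carlo simulation; the following is a minimal sketch assuming NumPy, with an illustrative two-dimensional covariance matrix and threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative 2-dimensional example with a chosen covariance and threshold
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
l_star = np.array([1.0, 1.0])  # some positive vector l*
gamma = 1.5                    # positive constant
x = gamma * Sigma @ l_star     # threshold of the form gamma * Sigma * l*

# Monte Carlo estimate of Q(x) = P(X >= x componentwise), X ~ N(0, Sigma)
samples = rng.multivariate_normal(np.zeros(2), Sigma, size=1_000_000)
q_est = np.mean(np.all(samples >= x, axis=1))
print(q_est)
```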

References