Bussgang theorem


In mathematics, the Bussgang theorem is a theorem of stochastic analysis. The theorem states that the cross-correlation between a Gaussian signal before and after it has passed through a nonlinear operation is equal to the signal's autocorrelation up to a constant. It was first published by Julian J. Bussgang in 1952 while he was at the Massachusetts Institute of Technology.[1]

Statement

Let $\{X(t)\}$ be a zero-mean stationary Gaussian random process and $\{Y(t)\} = g(X(t))$, where $g(\cdot)$ is a nonlinear amplitude distortion.

If $R_X(\tau)$ is the autocorrelation function of $\{X(t)\}$, then the cross-correlation function of $\{X(t)\}$ and $\{Y(t)\}$ is

$$R_{XY}(\tau) = C R_X(\tau),$$

where $C$ is a constant that depends only on $g(\cdot)$.

It can be further shown that

$$C = \frac{1}{\sigma^3 \sqrt{2\pi}} \int_{-\infty}^{\infty} u\, g(u)\, e^{-\frac{u^2}{2\sigma^2}}\, du,$$

where $\sigma^2$ is the variance of $\{X(t)\}$.
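The theorem can be checked numerically. The following is a minimal Monte Carlo sketch (assuming NumPy; the moving-average Gaussian process and the nonlinearity $g(u) = \tanh(u)$ are illustrative choices, not part of the original statement), using the equivalent form $C = E[X\,g(X)]/\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative zero-mean stationary Gaussian process: white noise smoothed
# by a short moving-average filter (any stationary Gaussian process works).
n = 1_000_000
x = np.convolve(rng.standard_normal(n), np.ones(8) / np.sqrt(8), mode="same")
sigma2 = x.var()

g = np.tanh  # illustrative nonlinear amplitude distortion
y = g(x)

# Bussgang constant: C = E[X g(X)] / sigma^2, which equals the integral
# (1 / (sigma^3 sqrt(2 pi))) * int u g(u) exp(-u^2 / (2 sigma^2)) du.
C = np.mean(x * y) / sigma2

for lag in (1, 2, 5):
    rxy = np.mean(x[:-lag] * y[lag:])  # cross-correlation R_XY(tau)
    rx = np.mean(x[:-lag] * x[lag:])   # autocorrelation  R_X(tau)
    print(f"lag {lag}: R_XY = {rxy:.4f}, C * R_X = {C * rx:.4f}")
```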

Derivation for One-bit Quantization

It is a property of the two-dimensional normal distribution that the joint density of $y_1$ and $y_2$ depends only on their covariance and is given explicitly by the expression

$$p(y_1,y_2) = \frac{1}{2\pi\sqrt{1-\rho^2}}\, e^{-\frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}},$$

where $y_1$ and $y_2$ are standard Gaussian random variables with correlation $\phi_{y_1 y_2} = \rho$.

Assume that $r_2 = Q(y_2)$; the correlation between $y_1$ and $r_2$ is then

$$\phi_{y_1 r_2} = \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y_1 Q(y_2)\, e^{-\frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}}\, dy_1\, dy_2.$$

Since

$$\int_{-\infty}^{\infty} y_1\, e^{-\frac{1}{2(1-\rho^2)} y_1^2 + \frac{\rho y_2}{1-\rho^2} y_1}\, dy_1 = \rho\sqrt{2\pi(1-\rho^2)}\; y_2\, e^{\frac{\rho^2 y_2^2}{2(1-\rho^2)}},$$

the correlation $\phi_{y_1 r_2}$ may be simplified as

$$\phi_{y_1 r_2} = \frac{\rho}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2 Q(y_2)\, e^{-\frac{y_2^2}{2}}\, dy_2.$$

The integral above is seen to depend only on the distortion characteristic $Q(\cdot)$ and is independent of $\rho$.

Remembering that $\rho = \phi_{y_1 y_2}$, we observe that for a given distortion characteristic $Q(\cdot)$, the ratio $\frac{\phi_{y_1 r_2}}{\phi_{y_1 y_2}}$ is

$$K_Q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2 Q(y_2)\, e^{-\frac{y_2^2}{2}}\, dy_2.$$
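As a quick sanity check (not in the original), taking the undistorted characteristic $Q(y) = y$ gives $K_Q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2^2\, e^{-\frac{y_2^2}{2}}\, dy_2 = 1$, i.e., no distortion leaves the correlation unchanged.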

Therefore, the correlation can be rewritten in the form

$$\phi_{y_1 r_2} = K_Q\, \phi_{y_1 y_2}.$$

The above equation is the mathematical expression of Bussgang's theorem.

If $Q(x) = \operatorname{sign}(x)$, i.e., one-bit quantization, then

$$K_Q = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} y_2\, e^{-\frac{y_2^2}{2}}\, dy_2 = \sqrt{\frac{2}{\pi}}.$$

[2][3][1][4]
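This value is easy to confirm by simulation. A minimal sketch, assuming NumPy (the correlation $\rho = 0.6$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.6, 1_000_000

# Jointly standard Gaussian pair with correlation rho.
z1, z2 = rng.standard_normal((2, n))
y1 = z1
y2 = rho * z1 + np.sqrt(1 - rho**2) * z2

r2 = np.sign(y2)  # one-bit quantization Q(x) = sign(x)

print(np.mean(y1 * r2))          # estimated phi_{y1 r2}
print(np.sqrt(2 / np.pi) * rho)  # predicted K_Q * phi_{y1 y2}
```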

Arcsine law

If the two random variables are both distorted, i.e., $r_1 = Q(y_1)$, $r_2 = Q(y_2)$, the correlation of $r_1$ and $r_2$ is

$$\phi_{r_1 r_2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} Q(y_1) Q(y_2)\, p(y_1,y_2)\, dy_1\, dy_2.$$

When $Q(x) = \operatorname{sign}(x)$, the expression becomes

$$\phi_{r_1 r_2} = \frac{1}{2\pi\sqrt{1-\rho^2}} \left[ \int_0^{\infty}\!\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2 + \int_{-\infty}^{0}\!\!\int_{-\infty}^{0} e^{-\alpha}\, dy_1\, dy_2 - \int_0^{\infty}\!\!\int_{-\infty}^{0} e^{-\alpha}\, dy_1\, dy_2 - \int_{-\infty}^{0}\!\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2 \right],$$

where

$$\alpha = \frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}.$$

Noticing that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(y_1,y_2)\, dy_1\, dy_2 = \frac{1}{2\pi\sqrt{1-\rho^2}} \left[ \int_0^{\infty}\!\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2 + \int_{-\infty}^{0}\!\!\int_{-\infty}^{0} e^{-\alpha}\, dy_1\, dy_2 + \int_0^{\infty}\!\!\int_{-\infty}^{0} e^{-\alpha}\, dy_1\, dy_2 + \int_{-\infty}^{0}\!\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2 \right] = 1,$$

and that, by symmetry, $\int_0^{\infty}\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2 = \int_{-\infty}^{0}\!\int_{-\infty}^{0} e^{-\alpha}\, dy_1\, dy_2$ and $\int_0^{\infty}\!\int_{-\infty}^{0} e^{-\alpha}\, dy_1\, dy_2 = \int_{-\infty}^{0}\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2$,

we can simplify the expression of $\phi_{r_1 r_2}$ as

$$\phi_{r_1 r_2} = \frac{4}{2\pi\sqrt{1-\rho^2}} \int_0^{\infty}\!\!\int_0^{\infty} e^{-\alpha}\, dy_1\, dy_2 - 1.$$

Also, it is convenient to introduce the polar coordinates $y_1 = R\cos\theta$, $y_2 = R\sin\theta$. It is thus found that

$$\phi_{r_1 r_2} = \frac{4}{2\pi\sqrt{1-\rho^2}} \int_0^{\pi/2}\!\!\int_0^{\infty} e^{-\frac{R^2 - 2R^2\rho\cos\theta\sin\theta}{2(1-\rho^2)}}\, R\, dR\, d\theta - 1 = \frac{4}{2\pi\sqrt{1-\rho^2}} \int_0^{\pi/2}\!\!\int_0^{\infty} e^{-\frac{R^2(1-\rho\sin 2\theta)}{2(1-\rho^2)}}\, R\, dR\, d\theta - 1.$$

Integration gives

$$\phi_{r_1 r_2} = \frac{2\sqrt{1-\rho^2}}{\pi} \int_0^{\pi/2} \frac{d\theta}{1-\rho\sin 2\theta} - 1 = \frac{2}{\pi} \arctan\left(\frac{\tan\theta - \rho}{\sqrt{1-\rho^2}}\right)\Bigg|_0^{\pi/2} - 1 = \frac{2}{\pi}\arcsin(\rho).$$

This is called the "arcsine law", which was first found by J. H. Van Vleck in 1943 and republished in 1966.[2][3] The arcsine law can also be proved in a simpler way by applying Price's Theorem.[4][5]

The function $f(x) = \frac{2}{\pi}\arcsin x$ can be approximated as $f(x) \approx \frac{2}{\pi} x$ when $x$ is small.
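The arcsine law itself can be verified by a short simulation (a sketch assuming NumPy; the $\rho$ values and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

for rho in (0.1, 0.5, 0.9):
    # Jointly standard Gaussian pair with correlation rho.
    z1, z2 = rng.standard_normal((2, n))
    y1 = z1
    y2 = rho * z1 + np.sqrt(1 - rho**2) * z2
    # Correlation of the two one-bit-quantized outputs.
    est = np.mean(np.sign(y1) * np.sign(y2))
    print(f"rho = {rho}: simulated {est:.4f}, "
          f"(2/pi) arcsin(rho) = {2 / np.pi * np.arcsin(rho):.4f}")
```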

Price's Theorem

Given two jointly normal random variables $y_1$ and $y_2$ with joint probability function

$$p(y_1,y_2) = \frac{1}{2\pi\sqrt{1-\rho^2}}\, e^{-\frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}},$$

we form the mean

$$I(\rho) = E(g(y_1,y_2)) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} g(y_1,y_2)\, p(y_1,y_2)\, dy_1\, dy_2$$

of some function $g(y_1,y_2)$ of $(y_1,y_2)$. If $g(y_1,y_2)\, p(y_1,y_2) \to 0$ as $(y_1,y_2) \to \infty$, then

$$\frac{\partial^n I(\rho)}{\partial \rho^n} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{\partial^{2n} g(y_1,y_2)}{\partial y_1^n\, \partial y_2^n}\, p(y_1,y_2)\, dy_1\, dy_2 = E\left(\frac{\partial^{2n} g(y_1,y_2)}{\partial y_1^n\, \partial y_2^n}\right).$$
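As a simple illustration (not in the original): for $g(y_1,y_2) = y_1 y_2$ and $n = 1$, the right-hand side is $E\left(\frac{\partial^2 (y_1 y_2)}{\partial y_1\, \partial y_2}\right) = E(1) = 1$, which agrees with differentiating $I(\rho) = E(y_1 y_2) = \rho$ directly.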

Proof. The joint characteristic function of the random variables $y_1$ and $y_2$ is by definition the integral

$$\Phi(\omega_1,\omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(y_1,y_2)\, e^{j(\omega_1 y_1 + \omega_2 y_2)}\, dy_1\, dy_2 = \exp\left\{ -\frac{\omega_1^2 + \omega_2^2 + 2\rho\omega_1\omega_2}{2} \right\}.$$

From the two-dimensional inversion formula of the Fourier transform, it follows that

$$p(y_1,y_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Phi(\omega_1,\omega_2)\, e^{-j(\omega_1 y_1 + \omega_2 y_2)}\, d\omega_1\, d\omega_2 = \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp\left\{ -\frac{\omega_1^2 + \omega_2^2 + 2\rho\omega_1\omega_2}{2} \right\} e^{-j(\omega_1 y_1 + \omega_2 y_2)}\, d\omega_1\, d\omega_2.$$

Therefore, plugging the expression of $p(y_1,y_2)$ into $I(\rho)$ and differentiating with respect to $\rho$, we obtain

$$\begin{aligned}
\frac{\partial^n I(\rho)}{\partial \rho^n} &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y_1,y_2)\, \frac{\partial^n p(y_1,y_2)}{\partial \rho^n}\, dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y_1,y_2) \left( \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{\partial^n \Phi(\omega_1,\omega_2)}{\partial \rho^n}\, e^{-j(\omega_1 y_1 + \omega_2 y_2)}\, d\omega_1\, d\omega_2 \right) dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y_1,y_2) \left( \frac{(-1)^n}{4\pi^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \omega_1^n \omega_2^n\, \Phi(\omega_1,\omega_2)\, e^{-j(\omega_1 y_1 + \omega_2 y_2)}\, d\omega_1\, d\omega_2 \right) dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y_1,y_2) \left( \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Phi(\omega_1,\omega_2)\, \frac{\partial^{2n} e^{-j(\omega_1 y_1 + \omega_2 y_2)}}{\partial y_1^n\, \partial y_2^n}\, d\omega_1\, d\omega_2 \right) dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y_1,y_2)\, \frac{\partial^{2n} p(y_1,y_2)}{\partial y_1^n\, \partial y_2^n}\, dy_1\, dy_2.
\end{aligned}$$

After repeated integration by parts, and using the condition at infinity (where $g(y_1,y_2)\, p(y_1,y_2)$ vanishes), we obtain Price's theorem:

$$\frac{\partial^n I(\rho)}{\partial \rho^n} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(y_1,y_2)\, \frac{\partial^{2n} p(y_1,y_2)}{\partial y_1^n\, \partial y_2^n}\, dy_1\, dy_2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{\partial^2 g(y_1,y_2)}{\partial y_1\, \partial y_2}\, \frac{\partial^{2n-2} p(y_1,y_2)}{\partial y_1^{n-1}\, \partial y_2^{n-1}}\, dy_1\, dy_2 = \cdots = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{\partial^{2n} g(y_1,y_2)}{\partial y_1^n\, \partial y_2^n}\, p(y_1,y_2)\, dy_1\, dy_2.$$

[4][5]
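Price's theorem can also be spot-checked numerically. The sketch below (assuming NumPy; the test function $g(y_1,y_2) = y_1^3 y_2^3$ is an arbitrary choice) compares a finite-difference estimate of $\partial I/\partial\rho$ with $E\left(\frac{\partial^2 g}{\partial y_1\, \partial y_2}\right) = E(9 y_1^2 y_2^2)$, reusing the same noise across $\rho$ values so the difference quotient is stable:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000
z1, z2 = rng.standard_normal((2, n))  # fixed noise shared across rho values

def I(rho):
    """Monte Carlo estimate of I(rho) = E[g(y1, y2)] for g = y1^3 * y2^3."""
    y1 = z1
    y2 = rho * z1 + np.sqrt(1 - rho**2) * z2
    return np.mean(y1**3 * y2**3)

rho, h = 0.4, 1e-3
lhs = (I(rho + h) - I(rho - h)) / (2 * h)  # finite-difference dI/drho

y2 = rho * z1 + np.sqrt(1 - rho**2) * z2
rhs = np.mean(9 * z1**2 * y2**2)           # E[d^2 g / (dy1 dy2)] at the same rho

print(lhs, rhs)  # both approach 9 + 18 * rho^2 = 11.88 for rho = 0.4
```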

Proof of Arcsine law by Price's Theorem

If $g(y_1,y_2) = \operatorname{sign}(y_1)\operatorname{sign}(y_2)$, then $\frac{\partial^2 g(y_1,y_2)}{\partial y_1\, \partial y_2} = 4\delta(y_1)\delta(y_2)$, where $\delta(\cdot)$ is the Dirac delta function.

Substituting into Price's Theorem, we obtain

$$\frac{\partial E(\operatorname{sign}(y_1)\operatorname{sign}(y_2))}{\partial \rho} = \frac{\partial I(\rho)}{\partial \rho} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} 4\delta(y_1)\delta(y_2)\, p(y_1,y_2)\, dy_1\, dy_2 = \frac{2}{\pi\sqrt{1-\rho^2}}.$$

When $\rho = 0$, $I(\rho) = 0$. Thus

$$E(\operatorname{sign}(y_1)\operatorname{sign}(y_2)) = I(\rho) = \frac{2}{\pi} \int_0^{\rho} \frac{1}{\sqrt{1-\rho^2}}\, d\rho = \frac{2}{\pi}\arcsin(\rho),$$

which is Van Vleck's well-known "arcsine law".

[2][3]

Application

This theorem implies that a simplified correlator can be designed: instead of having to multiply two signals, the cross-correlation problem reduces to the gating of one signal with the other.
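A sketch of this idea (an illustration assuming NumPy, with synthetic signals; the rescaling follows from the one-bit Bussgang constant $K_Q = \sqrt{2/\pi}$ derived above):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Two correlated zero-mean Gaussian signals (synthetic stand-ins).
s = rng.standard_normal(n)
x = s + 0.5 * rng.standard_normal(n)
y = s + 0.5 * rng.standard_normal(n)

full = np.mean(x * y)  # conventional correlator: full multiplication

# Simplified correlator: gate x with the sign of y (no multiplier needed),
# then undo the one-bit Bussgang factor sqrt(2/pi) / sigma_y.
gated = np.mean(x * np.sign(y)) * y.std() / np.sqrt(2 / np.pi)

print(full, gated)  # the two estimates agree up to Monte Carlo error
```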

References


  1. J. J. Bussgang, "Cross-correlation Functions of Amplitude-distorted Gaussian Signals", Res. Lab. Elec., Mass. Inst. Technol., Cambridge, MA, Tech. Rep. 216, March 1952.
  2. J. H. Van Vleck, "The Spectrum of Clipped Noise", Radio Research Laboratory, Harvard University, Report No. 51, July 1943.
  3. J. H. Van Vleck and D. Middleton, "The spectrum of clipped noise", Proceedings of the IEEE, vol. 54, no. 1, pp. 2–19, January 1966.
  4. R. Price, "A useful theorem for nonlinear devices having Gaussian inputs", IRE Transactions on Information Theory, vol. IT-4, no. 2, pp. 69–72, June 1958.
  5. A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, 2002.