Estimation of signal parameters via rotational invariance techniques



[Figure: Example of separation into subarrays (2D ESPRIT)]

Estimation of signal parameters via rotational invariance techniques (ESPRIT) is a technique to determine the parameters of a mixture of sinusoids in background noise. This technique was first proposed for frequency estimation.[1] However, with the introduction of phased-array systems in everyday technology, it is also used for angle-of-arrival estimation.[2]

One-dimensional ESPRIT

At instance $t$, the $M$ (complex-valued) output signals (measurements) $y_m[t]$, $m = 1, \ldots, M$, of the system are related to the $K$ (complex-valued) input signals $x_k[t]$, $k = 1, \ldots, K$, as
$$y_m[t] = \sum_{k=1}^{K} a_{m,k}\, x_k[t] + n_m[t],$$
where $n_m[t]$ denotes the noise added by the system. The one-dimensional form of ESPRIT can be applied if the weights have the form $a_{m,k} = e^{j(m-1)\omega_k}$, whose phases are integer multiples of some radial frequency $\omega_k$. This frequency depends only on the index of the system's input, i.e., $k$. The goal of ESPRIT is to estimate the $\omega_k$'s, given the outputs $y_m[t]$ and the number of input signals, $K$. Since the radial frequencies are the actual objectives, $a_{m,k}$ is denoted as $a_m(\omega_k)$.

Collating the weights $a_m(\omega_k)$ into a vector $\mathbf{a}(\omega_k) = [1 \;\; e^{j\omega_k} \;\; e^{j2\omega_k} \;\; \ldots \;\; e^{j(M-1)\omega_k}]^{\mathsf{T}}$ and the $M$ output signals at instance $t$ into a vector $\mathbf{y}[t] = [y_1[t] \;\; y_2[t] \;\; \ldots \;\; y_M[t]]^{\mathsf{T}}$, we have
$$\mathbf{y}[t] = \sum_{k=1}^{K} \mathbf{a}(\omega_k)\, x_k[t] + \mathbf{n}[t],$$
where $\mathbf{n}[t] = [n_1[t] \;\; n_2[t] \;\; \ldots \;\; n_M[t]]^{\mathsf{T}}$. Further, when the weight vectors $\mathbf{a}(\omega_1), \mathbf{a}(\omega_2), \ldots, \mathbf{a}(\omega_K)$ are put into a Vandermonde matrix $\mathbf{A} = [\mathbf{a}(\omega_1) \;\; \mathbf{a}(\omega_2) \;\; \ldots \;\; \mathbf{a}(\omega_K)]$, and the $K$ inputs at instance $t$ into a vector $\mathbf{x}[t] = [x_1[t] \;\; \ldots \;\; x_K[t]]^{\mathsf{T}}$, we can write
$$\mathbf{y}[t] = \mathbf{A}\,\mathbf{x}[t] + \mathbf{n}[t].$$
With several measurements at instances $t = 1, 2, \ldots, T$ and the notations $\mathbf{Y} = [\mathbf{y}[1] \;\; \mathbf{y}[2] \;\; \ldots \;\; \mathbf{y}[T]]$, $\mathbf{X} = [\mathbf{x}[1] \;\; \mathbf{x}[2] \;\; \ldots \;\; \mathbf{x}[T]]$ and $\mathbf{N} = [\mathbf{n}[1] \;\; \mathbf{n}[2] \;\; \ldots \;\; \mathbf{n}[T]]$, the model equation becomes
$$\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{N}.$$
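The model equation $\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{N}$ can be simulated numerically. A minimal NumPy sketch, with arbitrarily chosen frequencies, dimensions, and noise level (all assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, T = 8, 2, 100                      # sensors, sources, snapshots (assumed)
omega = np.array([0.7, 1.9])             # true radial frequencies (assumed)

# Vandermonde steering matrix: A[m, k] = exp(j * m * omega_k), m = 0 .. M-1
A = np.exp(1j * np.outer(np.arange(M), omega))

# Random complex input signals and a small amount of additive noise
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

Y = A @ X + N                            # model equation Y = A X + N
print(Y.shape)                           # (8, 100)
```

Each column of `Y` is one snapshot $\mathbf{y}[t]$; the subsequent steps of ESPRIT operate on this matrix alone.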

Dividing into virtual sub-arrays

[Figure: Maximum overlapping of two sub-arrays ($N$ denotes the number of sensors in the array, $m$ the number of sensors in each sub-array, and $\mathbf{J}_1$ and $\mathbf{J}_2$ are selection matrices)]

The weight vector $\mathbf{a}(\omega_k)$ has the property that adjacent entries are related:
$$[\mathbf{a}(\omega_k)]_{m+1} = e^{j\omega_k}\, [\mathbf{a}(\omega_k)]_{m}.$$
To express this relation for the whole vector $\mathbf{a}(\omega_k)$, we introduce two selection matrices $\mathbf{J}_1$ and $\mathbf{J}_2$:
$$\mathbf{J}_1 = [\mathbf{I}_{M-1} \quad \mathbf{0}], \qquad \mathbf{J}_2 = [\mathbf{0} \quad \mathbf{I}_{M-1}].$$
Here, $\mathbf{I}_{M-1}$ is an identity matrix of size $(M-1)$ and $\mathbf{0}$ is a column vector of zeros.

The vector $\mathbf{J}_1\mathbf{a}(\omega_k)$ [$\mathbf{J}_2\mathbf{a}(\omega_k)$] contains all elements of $\mathbf{a}(\omega_k)$ except the last [first] one. Thus, $\mathbf{J}_2\mathbf{a}(\omega_k) = e^{j\omega_k}\,\mathbf{J}_1\mathbf{a}(\omega_k)$ and
$$\mathbf{J}_2\mathbf{A} = \mathbf{J}_1\mathbf{A}\mathbf{H}, \qquad \text{where } \mathbf{H} := \begin{bmatrix} e^{j\omega_1} & & \\ & \ddots & \\ & & e^{j\omega_K} \end{bmatrix}.$$
The above relation is the first major observation required for ESPRIT. The second major observation concerns the signal subspace that can be computed from the output signals.
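The rotational-invariance relation $\mathbf{J}_2\mathbf{A} = \mathbf{J}_1\mathbf{A}\mathbf{H}$ can be verified numerically. A small sketch, with assumed example frequencies:

```python
import numpy as np

M, K = 8, 2                              # sensors, sources (assumed)
omega = np.array([0.7, 1.9])             # example radial frequencies (assumed)

# Vandermonde steering matrix: A[m, k] = exp(j * m * omega_k)
A = np.exp(1j * np.outer(np.arange(M), omega))

I = np.eye(M - 1)
J1 = np.hstack([I, np.zeros((M - 1, 1))])   # selects all rows but the last
J2 = np.hstack([np.zeros((M - 1, 1)), I])   # selects all rows but the first

H = np.diag(np.exp(1j * omega))             # diagonal matrix of exponentials

# Rotational invariance: shifting the array by one sensor multiplies by H
assert np.allclose(J2 @ A, J1 @ A @ H)
```

In practice one never forms `J1` and `J2` explicitly; slicing (`A[:-1]`, `A[1:]`) does the same selection.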

Signal subspace

The singular value decomposition (SVD) of $\mathbf{Y}$ is given as
$$\mathbf{Y} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^{*},$$
where $\mathbf{U} \in \mathbb{C}^{M \times M}$ and $\mathbf{V} \in \mathbb{C}^{T \times T}$ are unitary matrices and $\mathbf{\Sigma}$ is a diagonal matrix of size $M \times T$ that holds the singular values from the largest (top left) in descending order. The operator $(\cdot)^{*}$ denotes the complex-conjugate transpose (Hermitian transpose).

Let us assume that $T \geq M$. Notice that we have $K$ input signals. If there were no noise, there would be only $K$ non-zero singular values. We assume that the $K$ largest singular values stem from these input signals and that the remaining singular values stem from noise. The matrices in the SVD of $\mathbf{Y}$ can be partitioned into submatrices, where some submatrices correspond to the signal subspace and some correspond to the noise subspace:
$$\mathbf{U} = [\mathbf{U}_S \quad \mathbf{U}_N], \qquad \mathbf{\Sigma} = \begin{bmatrix} \mathbf{\Sigma}_S & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{\Sigma}_N & \mathbf{0} \end{bmatrix}, \qquad \mathbf{V} = [\mathbf{V}_S \quad \mathbf{V}_N \quad \mathbf{V}_0],$$
where $\mathbf{U}_S \in \mathbb{C}^{M \times K}$ and $\mathbf{V}_S \in \mathbb{C}^{T \times K}$ contain the first $K$ columns of $\mathbf{U}$ and $\mathbf{V}$, respectively, and $\mathbf{\Sigma}_S \in \mathbb{C}^{K \times K}$ is a diagonal matrix comprising the $K$ largest singular values.

Thus, the SVD can be written as
$$\mathbf{Y} = \mathbf{U}_S\mathbf{\Sigma}_S\mathbf{V}_S^{*} + \mathbf{U}_N\mathbf{\Sigma}_N\mathbf{V}_N^{*},$$
where $\mathbf{U}_S$, $\mathbf{V}_S$, and $\mathbf{\Sigma}_S$ represent the contribution of the input signals $x_k[t]$ to $\mathbf{Y}$. We term $\mathbf{U}_S$ the signal subspace. In contrast, $\mathbf{U}_N$, $\mathbf{V}_N$, and $\mathbf{\Sigma}_N$ represent the contribution of the noise $n_m[t]$ to $\mathbf{Y}$.

Hence, from the system model, we can write $\mathbf{A}\mathbf{X} = \mathbf{U}_S\mathbf{\Sigma}_S\mathbf{V}_S^{*}$ and $\mathbf{N} = \mathbf{U}_N\mathbf{\Sigma}_N\mathbf{V}_N^{*}$. Also, from the former, we can write
$$\mathbf{U}_S = \mathbf{A}\mathbf{F},$$
where $\mathbf{F} = \mathbf{X}\mathbf{V}_S\mathbf{\Sigma}_S^{-1}$. In the sequel, it is only important that such an invertible matrix $\mathbf{F}$ exists; its actual content is not important.

Note: The signal subspace can also be extracted from the spectral decomposition of the auto-correlation matrix of the measurements, which is estimated as
$$\mathbf{R}_{YY} = \frac{1}{T}\sum_{t=1}^{T} \mathbf{y}[t]\,\mathbf{y}[t]^{*} = \frac{1}{T}\mathbf{Y}\mathbf{Y}^{*} = \frac{1}{T}\mathbf{U}\mathbf{\Sigma}\mathbf{\Sigma}^{*}\mathbf{U}^{*} = \frac{1}{T}\mathbf{U}_S\mathbf{\Sigma}_S^{2}\mathbf{U}_S^{*} + \frac{1}{T}\mathbf{U}_N\mathbf{\Sigma}_N^{2}\mathbf{U}_N^{*}.$$
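Both routes to the signal subspace — the first $K$ left singular vectors of $\mathbf{Y}$, and the $K$ dominant eigenvectors of the sample auto-correlation matrix — can be compared on synthetic data. A sketch under assumed low-noise conditions (frequencies and dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 8, 2, 200                      # sensors, sources, snapshots (assumed)
omega = np.array([0.7, 1.9])             # assumed frequencies
A = np.exp(1j * np.outer(np.arange(M), omega))
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
Y = A @ X + 1e-3 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Route 1: signal subspace from the SVD of Y
U, s, Vh = np.linalg.svd(Y)
Us = U[:, :K]                            # first K left singular vectors

# U_S spans (almost) the same subspace as A: projecting A onto it loses little
proj = Us @ (Us.conj().T @ A)
assert np.allclose(proj, A, atol=1e-2)

# Route 2: dominant eigenvectors of the sample auto-correlation matrix
Ryy = (Y @ Y.conj().T) / T
eigval, eigvec = np.linalg.eigh(Ryy)     # eigenvalues in ascending order
Us2 = eigvec[:, -K:]                     # eigenvectors of the K largest ones
assert np.allclose(Us2 @ (Us2.conj().T @ A), A, atol=1e-2)
```

Both estimates span the column space of $\mathbf{A}$ up to the noise level, which is all the subsequent steps require.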

Estimation of radial frequencies

We have established two expressions so far: $\mathbf{J}_2\mathbf{A} = \mathbf{J}_1\mathbf{A}\mathbf{H}$ and $\mathbf{U}_S = \mathbf{A}\mathbf{F}$. Now,
$$\mathbf{J}_2\mathbf{A} = \mathbf{J}_1\mathbf{A}\mathbf{H} \;\Rightarrow\; \mathbf{J}_2(\mathbf{U}_S\mathbf{F}^{-1}) = \mathbf{J}_1(\mathbf{U}_S\mathbf{F}^{-1})\mathbf{H} \;\Rightarrow\; \mathbf{S}_2 = \mathbf{S}_1\mathbf{P},$$
where $\mathbf{S}_1 := \mathbf{J}_1\mathbf{U}_S$ and $\mathbf{S}_2 := \mathbf{J}_2\mathbf{U}_S$ denote the truncated signal subspaces, and
$$\mathbf{P} = \mathbf{F}^{-1}\mathbf{H}\mathbf{F}.$$
The above equation has the form of an eigenvalue decomposition, and the phases of the eigenvalues in the diagonal matrix $\mathbf{H}$ are used to estimate the radial frequencies.

Thus, after solving for $\mathbf{P}$ in the relation $\mathbf{S}_2 = \mathbf{S}_1\mathbf{P}$, we find the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_K$ of $\mathbf{P}$, where $\lambda_k = \alpha_k e^{j\omega_k}$, and the radial frequencies $\omega_1, \omega_2, \ldots, \omega_K$ are estimated as the phases (arguments) of the eigenvalues.

Remark: In general, $\mathbf{S}_1$ is not invertible. One can use the least-squares estimate $\mathbf{P} = (\mathbf{S}_1^{*}\mathbf{S}_1)^{-1}\mathbf{S}_1^{*}\mathbf{S}_2$. An alternative would be the total least-squares estimate.
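The explicit normal-equations formula from the remark can be checked against a generic least-squares solver. A toy sketch in which `S1` and `S2` are arbitrary random stand-ins for the truncated subspaces:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random tall complex matrices standing in for S1 = J1 U_S and S2 = J2 U_S
S1 = rng.standard_normal((7, 2)) + 1j * rng.standard_normal((7, 2))
S2 = rng.standard_normal((7, 2)) + 1j * rng.standard_normal((7, 2))

# Explicit least-squares formula: P = (S1* S1)^{-1} S1* S2
P_formula = np.linalg.inv(S1.conj().T @ S1) @ (S1.conj().T @ S2)

# The same solution via a numerically preferable solver
P_lstsq, *_ = np.linalg.lstsq(S1, S2, rcond=None)

assert np.allclose(P_formula, P_lstsq)
```

For well-conditioned `S1` the two agree; `lstsq` (or a QR-based solve) avoids explicitly forming and inverting $\mathbf{S}_1^{*}\mathbf{S}_1$.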

Algorithm summary

Input: Measurements $\mathbf{Y} := [\mathbf{y}[1] \;\; \mathbf{y}[2] \;\; \ldots \;\; \mathbf{y}[T]]$, the number of input signals $K$ (estimate if not already known).

  1. Compute the singular value decomposition (SVD) $\mathbf{Y} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^{*}$ and extract the signal subspace $\mathbf{U}_S \in \mathbb{C}^{M \times K}$ as the first $K$ columns of $\mathbf{U}$.
  2. Compute $\mathbf{S}_1 = \mathbf{J}_1\mathbf{U}_S$ and $\mathbf{S}_2 = \mathbf{J}_2\mathbf{U}_S$, where $\mathbf{J}_1 = [\mathbf{I}_{M-1} \;\; \mathbf{0}]$ and $\mathbf{J}_2 = [\mathbf{0} \;\; \mathbf{I}_{M-1}]$.
  3. Solve for $\mathbf{P}$ in $\mathbf{S}_2 = \mathbf{S}_1\mathbf{P}$ (see the remark above).
  4. Compute the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_K$ of $\mathbf{P}$.
  5. The phases of the eigenvalues $\lambda_k = \alpha_k e^{j\omega_k}$ provide the radial frequencies, i.e., $\omega_k = \arg \lambda_k$.
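The five steps above can be sketched as a short NumPy function. This is a minimal illustration on synthetic data (frequencies, dimensions, and noise level are arbitrary assumptions), using the least-squares variant from the remark in step 3:

```python
import numpy as np

def esprit(Y, K):
    """Estimate K radial frequencies from an M x T measurement matrix Y."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :K]                                  # step 1: signal subspace
    S1, S2 = Us[:-1, :], Us[1:, :]                 # step 2: J1 Us and J2 Us
    P, *_ = np.linalg.lstsq(S1, S2, rcond=None)    # step 3: least squares
    lam = np.linalg.eigvals(P)                     # step 4: eigenvalues
    return np.angle(lam)                           # step 5: phases -> omega_k

# Synthetic check with two sources
rng = np.random.default_rng(4)
M, K, T = 8, 2, 200
omega = np.array([0.7, 1.9])                       # assumed true frequencies
A = np.exp(1j * np.outer(np.arange(M), omega))
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
Y = A @ X + 1e-3 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

est = np.sort(esprit(Y, K))
print(est)                                         # close to [0.7, 1.9]
```

At this noise level the estimates match the true frequencies to a few decimal places; accuracy degrades gracefully as the noise grows or the number of snapshots shrinks.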

Notes

Choice of selection matrices

In the derivation above, the selection matrices $\mathbf{J}_1 = [\mathbf{I}_{M-1} \;\; \mathbf{0}]$ and $\mathbf{J}_2 = [\mathbf{0} \;\; \mathbf{I}_{M-1}]$ were used. However, any appropriate matrices $\mathbf{J}_1$ and $\mathbf{J}_2$ may be used as long as the rotational invariance (i.e., $\mathbf{J}_2\mathbf{a}(\omega_k) = e^{j\omega_k}\,\mathbf{J}_1\mathbf{a}(\omega_k)$), or some generalization of it (see below), holds; accordingly, the matrices $\mathbf{J}_1\mathbf{A}$ and $\mathbf{J}_2\mathbf{A}$ may contain any rows of $\mathbf{A}$.

Generalized rotational invariance

The rotational invariance used in the derivation may be generalized. So far, the matrix $\mathbf{H}$ has been defined to be a diagonal matrix that stores the sought-after complex exponentials on its main diagonal. However, $\mathbf{H}$ may also exhibit some other structure.[3] For instance, it may be an upper triangular matrix. In this case, $\mathbf{P} = \mathbf{F}^{-1}\mathbf{H}\mathbf{F}$ constitutes a triangularization of $\mathbf{P}$.
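A toy numerical check of this generalization (all values arbitrarily chosen): if $\mathbf{H}$ is upper triangular with the complex exponentials on its diagonal, then $\mathbf{P} = \mathbf{F}^{-1}\mathbf{H}\mathbf{F}$ is similar to $\mathbf{H}$, so its eigenvalues are the diagonal entries of $\mathbf{H}$ and still recover the frequencies:

```python
import numpy as np

rng = np.random.default_rng(5)
K = 3
omega = np.array([0.4, 1.1, 2.3])        # assumed frequencies

# Upper-triangular H: complex exponentials on the diagonal, arbitrary above it
H = np.triu(rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K)), k=1) \
    + np.diag(np.exp(1j * omega))
F = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))  # invertible a.s.

P = np.linalg.inv(F) @ H @ F             # similar to H -> same eigenvalues

est = np.sort(np.angle(np.linalg.eigvals(P)))
assert np.allclose(est, np.sort(omega), atol=1e-6)
```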


References

  1. ↑ Template:Citation
  2. ↑ Volodymyr Vasylyshyn, "The direction of arrival estimation using ESPRIT with sparse arrays," Proc. 2009 European Radar Conference (EuRAD), 30 Sept.–2 Oct. 2009, pp. 246–249.
  3. ↑ Template:Cite journal