Block matrix pseudoinverse

In mathematics, a block matrix pseudoinverse is a formula for the pseudoinverse of a partitioned matrix. This is useful for decomposing or approximating many algorithms that update parameters in signal processing and are based on the least squares method.

Derivation

Consider a column-wise partitioned matrix:

<math display="block">
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}, \qquad
\mathbf{A} \in \mathbb{R}^{m \times n}, \quad
\mathbf{B} \in \mathbb{R}^{m \times p}, \quad
m \geq n + p.
</math>

If the above matrix has full column rank, the Moore–Penrose inverse matrices of it and its transpose are

<math display="block">
\begin{align}
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{+} &=
\left(\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{\mathrm{T}}
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}\right)^{-1}
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{\mathrm{T}}, \\[4pt]
\begin{bmatrix}\mathbf{A}^{\mathrm{T}} \\ \mathbf{B}^{\mathrm{T}}\end{bmatrix}^{+} &=
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}
\left(\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{\mathrm{T}}
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}\right)^{-1}.
\end{align}
</math>

This computation of the pseudoinverse requires (n + p)-square matrix inversion and does not take advantage of the block form.
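
As a numerical sanity check, the following NumPy sketch (an added illustration with arbitrary random blocks; the sizes and names are not from the source) verifies that the direct formula above matches a library pseudoinverse when <math>\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}</math> has full column rank.

<syntaxhighlight lang="python">
# Minimal check of M^+ = (M^T M)^{-1} M^T for a full-column-rank M = [A B].
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 8, 3, 2                          # m >= n + p
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
M = np.hstack([A, B])                      # [A B], shape (m, n + p)

M_pinv = np.linalg.inv(M.T @ M) @ M.T      # requires an (n + p)-square inversion
print(np.allclose(M_pinv, np.linalg.pinv(M)))   # True
</syntaxhighlight>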

To reduce computational costs to n- and p-square matrix inversions and to introduce parallelism by treating the blocks separately, one derives [1]

<math display="block">
\begin{align}
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{+} &=
\begin{bmatrix}
\mathbf{P}_{B}\mathbf{A}\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\mathbf{A}\right)^{-1}, \quad
\mathbf{P}_{A}\mathbf{B}\left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\mathbf{B}\right)^{-1}
\end{bmatrix}^{\mathrm{T}} =
\begin{bmatrix}
\left(\mathbf{P}_{B}\mathbf{A}\right)^{+} \\
\left(\mathbf{P}_{A}\mathbf{B}\right)^{+}
\end{bmatrix}, \\[4pt]
\begin{bmatrix}\mathbf{A}^{\mathrm{T}} \\ \mathbf{B}^{\mathrm{T}}\end{bmatrix}^{+} &=
\begin{bmatrix}
\mathbf{P}_{B}\mathbf{A}\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\mathbf{A}\right)^{-1}, \quad
\mathbf{P}_{A}\mathbf{B}\left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\mathbf{B}\right)^{-1}
\end{bmatrix} =
\begin{bmatrix}
\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\right)^{+} &
\left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\right)^{+}
\end{bmatrix},
\end{align}
</math>

where orthogonal projection matrices are defined by

<math display="block">
\mathbf{P}_{A} = \mathbf{I} - \mathbf{A}\left(\mathbf{A}^{\mathrm{T}}\mathbf{A}\right)^{-1}\mathbf{A}^{\mathrm{T}}, \qquad
\mathbf{P}_{B} = \mathbf{I} - \mathbf{B}\left(\mathbf{B}^{\mathrm{T}}\mathbf{B}\right)^{-1}\mathbf{B}^{\mathrm{T}}.
</math>
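
The block formula can be checked the same way. The sketch below (again an added illustration with random full-column-rank blocks) forms <math>\mathbf{P}_A</math> and <math>\mathbf{P}_B</math> as defined above and confirms that stacking <math>\left(\mathbf{P}_B\mathbf{A}\right)^{+}</math> over <math>\left(\mathbf{P}_A\mathbf{B}\right)^{+}</math> reproduces <math>\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{+}</math>.

<syntaxhighlight lang="python">
# Check that [A B]^+ equals the stack of (P_B A)^+ and (P_A B)^+.
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 8, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
M = np.hstack([A, B])

I = np.eye(m)
P_A = I - A @ np.linalg.inv(A.T @ A) @ A.T   # projector onto the orthogonal complement of range(A)
P_B = I - B @ np.linalg.inv(B.T @ B) @ B.T   # projector onto the orthogonal complement of range(B)

block_pinv = np.vstack([np.linalg.pinv(P_B @ A),    # (P_B A)^+, shape (n, m)
                        np.linalg.pinv(P_A @ B)])   # (P_A B)^+, shape (p, m)
print(np.allclose(block_pinv, np.linalg.pinv(M)))   # True
</syntaxhighlight>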

The above formulas are not necessarily valid if <math>\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}</math> does not have full rank – for example, if <math>\mathbf{A} \neq \mathbf{0}</math>, then

<math display="block">
\begin{bmatrix}\mathbf{A} & \mathbf{A}\end{bmatrix}^{+} =
\frac{1}{2}\begin{bmatrix}\mathbf{A}^{+} \\ \mathbf{A}^{+}\end{bmatrix} \neq
\begin{bmatrix}\left(\mathbf{P}_{A}\mathbf{A}\right)^{+} \\ \left(\mathbf{P}_{A}\mathbf{A}\right)^{+}\end{bmatrix} = \mathbf{0}.
</math>
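
A small numerical illustration of this caveat (our own example, with a nonzero <math>\mathbf{A}</math>): since <math>\mathbf{P}_A\mathbf{A} = \mathbf{0}</math>, the block formula collapses to the zero matrix in exact arithmetic, while the true pseudoinverse of <math>\begin{bmatrix}\mathbf{A} & \mathbf{A}\end{bmatrix}</math> is <math>\tfrac{1}{2}\begin{bmatrix}\mathbf{A}^{+} \\ \mathbf{A}^{+}\end{bmatrix}</math>.

<syntaxhighlight lang="python">
# Illustrative check of the rank-deficient caveat with a concrete nonzero A.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
M = np.hstack([A, A])                              # [A A] has rank 2, not full column rank 4

P_A = np.eye(3) - A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P_A @ A, 0))                     # True: P_A A = 0, so (P_A A)^+ = 0
print(np.allclose(np.linalg.pinv(M),
                  0.5 * np.vstack([np.linalg.pinv(A),
                                   np.linalg.pinv(A)])))   # True
</syntaxhighlight>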

Application to least squares problems

Given the same matrices as above, we consider the following least squares problems, which appear as multiple-objective optimizations or constrained problems in signal processing. Eventually, we can implement a parallel algorithm for least squares based on the following results.

Column-wise partitioning in over-determined least squares

Suppose a solution <math>\mathbf{x} = \begin{bmatrix}\mathbf{x}_1 \\ \mathbf{x}_2\end{bmatrix}</math> solves an over-determined system:

<math display="block">
\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}
\begin{bmatrix}\mathbf{x}_1 \\ \mathbf{x}_2\end{bmatrix} = \mathbf{d}, \qquad
\mathbf{d} \in \mathbb{R}^{m \times 1}.
</math>

Using the block matrix pseudoinverse, we have

<math display="block">
\mathbf{x} = \begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{+}\mathbf{d} =
\begin{bmatrix}\left(\mathbf{P}_{B}\mathbf{A}\right)^{+} \\ \left(\mathbf{P}_{A}\mathbf{B}\right)^{+}\end{bmatrix}\mathbf{d}.
</math>

Therefore, we have a decomposed solution:

<math display="block">
\mathbf{x}_1 = \left(\mathbf{P}_{B}\mathbf{A}\right)^{+}\mathbf{d}, \qquad
\mathbf{x}_2 = \left(\mathbf{P}_{A}\mathbf{B}\right)^{+}\mathbf{d}.
</math>
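
The decomposed solution can be exercised directly; the sketch below (illustrative random data, not from the source) computes <math>\mathbf{x}_1</math> and <math>\mathbf{x}_2</math> from the two projected subproblems and compares the result with an ordinary least squares solve of the full system.

<syntaxhighlight lang="python">
# Decomposed over-determined least squares: x1 = (P_B A)^+ d, x2 = (P_A B)^+ d.
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 10, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

I = np.eye(m)
P_A = I - A @ np.linalg.inv(A.T @ A) @ A.T
P_B = I - B @ np.linalg.inv(B.T @ B) @ B.T

x1 = np.linalg.pinv(P_B @ A) @ d            # first block of the solution
x2 = np.linalg.pinv(P_A @ B) @ d            # second block of the solution
x_ref, *_ = np.linalg.lstsq(np.hstack([A, B]), d, rcond=None)

print(np.allclose(np.concatenate([x1, x2]), x_ref))   # True
</syntaxhighlight>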

Row-wise partitioning in under-determined least squares

Suppose a solution <math>\mathbf{x}</math> solves an under-determined system:

<math display="block">
\begin{bmatrix}\mathbf{A}^{\mathrm{T}} \\ \mathbf{B}^{\mathrm{T}}\end{bmatrix}\mathbf{x} =
\begin{bmatrix}\mathbf{e} \\ \mathbf{f}\end{bmatrix}, \qquad
\mathbf{e} \in \mathbb{R}^{n \times 1}, \quad
\mathbf{f} \in \mathbb{R}^{p \times 1}.
</math>

The minimum-norm solution is given by

<math display="block">
\mathbf{x} = \begin{bmatrix}\mathbf{A}^{\mathrm{T}} \\ \mathbf{B}^{\mathrm{T}}\end{bmatrix}^{+}
\begin{bmatrix}\mathbf{e} \\ \mathbf{f}\end{bmatrix}.
</math>

Using the block matrix pseudoinverse, we have

<math display="block">
\mathbf{x} =
\begin{bmatrix}\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\right)^{+} & \left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\right)^{+}\end{bmatrix}
\begin{bmatrix}\mathbf{e} \\ \mathbf{f}\end{bmatrix} =
\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\right)^{+}\mathbf{e} +
\left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\right)^{+}\mathbf{f}.
</math>
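
The row-wise partitioned, under-determined case can be checked numerically as well; in the sketch below (illustrative data), the minimum-norm solution assembled as <math>\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_B\right)^{+}\mathbf{e} + \left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_A\right)^{+}\mathbf{f}</math> matches the pseudoinverse of the stacked system applied to the stacked right-hand side.

<syntaxhighlight lang="python">
# Minimum-norm solution of the under-determined system [A^T; B^T] x = [e; f].
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 10, 4, 3                     # m >= n + p, so [A^T; B^T] has full row rank
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
e = rng.standard_normal(n)
f = rng.standard_normal(p)

I = np.eye(m)
P_A = I - A @ np.linalg.inv(A.T @ A) @ A.T
P_B = I - B @ np.linalg.inv(B.T @ B) @ B.T

x_blocks = np.linalg.pinv(A.T @ P_B) @ e + np.linalg.pinv(B.T @ P_A) @ f
x_ref = np.linalg.pinv(np.vstack([A.T, B.T])) @ np.concatenate([e, f])

print(np.allclose(x_blocks, x_ref))    # True
</syntaxhighlight>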

Comments on matrix inversion

Instead of <math>\left(\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}^{\mathrm{T}}\begin{bmatrix}\mathbf{A} & \mathbf{B}\end{bmatrix}\right)^{-1}</math>, we need to calculate, directly or indirectly,

<math display="block">
\left(\mathbf{A}^{\mathrm{T}}\mathbf{A}\right)^{-1}, \quad
\left(\mathbf{B}^{\mathrm{T}}\mathbf{B}\right)^{-1}, \quad
\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\mathbf{A}\right)^{-1}, \quad
\left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\mathbf{B}\right)^{-1}.
</math>

In a small, dense system, we can use the singular value decomposition, QR decomposition, or Cholesky decomposition to replace the matrix inversions with numerical routines. In a large system, we may employ iterative methods such as Krylov subspace methods.

Considering parallel algorithms, we can compute <math>\left(\mathbf{A}^{\mathrm{T}}\mathbf{A}\right)^{-1}</math> and <math>\left(\mathbf{B}^{\mathrm{T}}\mathbf{B}\right)^{-1}</math> in parallel. Then <math>\left(\mathbf{A}^{\mathrm{T}}\mathbf{P}_{B}\mathbf{A}\right)^{-1}</math> and <math>\left(\mathbf{B}^{\mathrm{T}}\mathbf{P}_{A}\mathbf{B}\right)^{-1}</math> can also be computed in parallel.
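
One possible realization of this pattern is sketched below. It is not prescribed by the article: the explicit inverses are replaced by Cholesky-based solves of the normal equations, and the A- and B-branches run concurrently in a thread pool; the helper function and the parallel layout are illustrative choices under the full-column-rank assumption.

<syntaxhighlight lang="python">
# Parallel sketch: each branch solves (X^T P_Y X) x = X^T P_Y d without
# forming an explicit inverse, using a Cholesky factorization.
import numpy as np
from numpy.linalg import cholesky, solve
from concurrent.futures import ThreadPoolExecutor

def branch(X, Y, d):
    """Return (P_Y X)^+ d = (X^T P_Y X)^{-1} X^T P_Y d via solves, not inverses."""
    G = Y.T @ Y                                    # Gram matrix of the other block

    def apply_P(Z):
        # P_Y Z = Z - Y (Y^T Y)^{-1} Y^T Z, computed with a linear solve
        return Z - Y @ solve(G, Y.T @ Z)

    PX, Pd = apply_P(X), apply_P(d)
    L = cholesky(X.T @ PX)                         # X^T P_Y X is SPD under full column rank
    return solve(L.T, solve(L, X.T @ Pd))          # forward then backward substitution

rng = np.random.default_rng(4)
m, n, p = 200, 30, 20
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

with ThreadPoolExecutor(max_workers=2) as pool:
    fut1 = pool.submit(branch, A, B, d)            # x1 = (P_B A)^+ d
    fut2 = pool.submit(branch, B, A, d)            # x2 = (P_A B)^+ d
    x = np.concatenate([fut1.result(), fut2.result()])

x_ref, *_ = np.linalg.lstsq(np.hstack([A, B]), d, rcond=None)
print(np.allclose(x, x_ref))                       # True
</syntaxhighlight>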

References
