Adjugate matrix


In linear algebra, the adjugate or classical adjoint of a square matrix A, adj(A), is the transpose of its cofactor matrix.[1][2] It is occasionally known as the adjunct matrix,[3][4] or "adjoint",[5] though the latter term normally refers to a different concept, the adjoint operator, which for a matrix is the conjugate transpose.

The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:

<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{I},</math>

where I is the identity matrix of the same size as A. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.
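A minimal SymPy check of these two statements (the 2 × 2 matrix here is an arbitrary invertible example chosen for illustration, not one taken from the text):

<syntaxhighlight lang="python">
from sympy import Matrix, eye

A = Matrix([[1, 2], [3, 4]])                   # det(A) = -2, so A is invertible
assert A * A.adjugate() == A.det() * eye(2)    # A adj(A) = det(A) I
assert A.inv() == A.adjugate() / A.det()       # A^{-1} = adj(A) / det(A)
</syntaxhighlight>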

Definition

The adjugate of A is the transpose of the cofactor matrix C of A,

<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.</math>

In more detail, suppose R is a unital commutative ring and A is an n × n matrix with entries from R. The (i, j)-minor of A, denoted <math>\mathbf{M}_{ij}</math>, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j)-minor times a sign factor:

<math>\mathbf{C} = \left( (-1)^{i+j} \mathbf{M}_{ij} \right)_{1 \le i,\, j \le n}.</math>

The adjugate of A is the transpose of C, that is, the n × n matrix whose (i, j) entry is the (j, i) cofactor of A,

<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left( (-1)^{i+j} \mathbf{M}_{ji} \right)_{1 \le i,\, j \le n}.</math>
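The definition translates directly into code. The following NumPy sketch (an illustrative helper written for this article, not a reference implementation from the text) builds the cofactor matrix entry by entry from the minors and then transposes it:

<syntaxhighlight lang="python">
import numpy as np

def adjugate(A):
    """Adjugate via the definition: transpose of the cofactor matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            # (i, j)-minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            M_ij = np.linalg.det(minor) if minor.size else 1.0   # det of a 0 x 0 matrix is 1
            C[i, j] = (-1) ** (i + j) * M_ij                     # (i, j) cofactor
    return C.T                                                   # adj(A) = C^T
</syntaxhighlight>

For any square input, adjugate(A) @ A should then equal det(A) times the identity, up to floating-point error.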

Important consequence

The adjugate is defined so that the product of A with its adjugate yields a diagonal matrix whose diagonal entries are the determinant det(A). That is,

<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A})\, \mathbf{A} = \det(\mathbf{A})\, \mathbf{I},</math>

where I is the n × n identity matrix. This is a consequence of the Laplace expansion of the determinant.

The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if det(A) is an invertible element of R. When this holds, the equation above yields

<math>\operatorname{adj}(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{A}^{-1}, \qquad \mathbf{A}^{-1} = \det(\mathbf{A})^{-1} \operatorname{adj}(\mathbf{A}).</math>
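The ring-theoretic statement can be illustrated over the integers: an integer matrix has an integer inverse exactly when its determinant is a unit of Z, that is, ±1, and the inverse is then the adjugate up to sign. A small SymPy sketch (the particular matrix is an assumption chosen for illustration):

<syntaxhighlight lang="python">
from sympy import Matrix, eye

A = Matrix([[3, 1], [5, 2]])                 # det(A) = 1, a unit in the ring of integers
assert A.det() == 1
A_inv = A.adjugate() / A.det()               # here det(A)^{-1} = 1, so the inverse is the adjugate
assert A_inv == Matrix([[2, -1], [-5, 3]])
assert A * A_inv == eye(2)                   # an inverse with integer entries
</syntaxhighlight>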

Examples

1 × 1 generic matrix

Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix (complex scalar) is <math>\mathbf{I} = \begin{bmatrix} 1 \end{bmatrix}</math>. Observe that <math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A})\, \mathbf{A} = (\det \mathbf{A})\, \mathbf{I}</math>.

2 × 2 generic matrix

The adjugate of the 2 × 2 matrix

<math>\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}</math>

is

<math>\operatorname{adj}(\mathbf{A}) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.</math>

By direct computation,

<math>\mathbf{A} \operatorname{adj}(\mathbf{A}) = \begin{bmatrix} ad - bc & 0 \\ 0 & ad - bc \end{bmatrix} = (\det \mathbf{A})\, \mathbf{I}.</math>

In this case, it is also true that det(adj(A)) = det(A) and hence that adj(adj(A)) = A.
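These 2 × 2 identities can be confirmed symbolically. The following SymPy sketch checks the adjugate entries, det(adj(A)) = det(A), and adj(adj(A)) = A for generic entries a, b, c, d:

<syntaxhighlight lang="python">
from sympy import Matrix, simplify, symbols, zeros

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])

assert simplify(A.adjugate() - Matrix([[d, -b], [-c, a]])) == zeros(2, 2)
assert simplify(A.adjugate().det() - A.det()) == 0            # det(adj(A)) = det(A)
assert simplify(A.adjugate().adjugate() - A) == zeros(2, 2)   # hence adj(adj(A)) = A
</syntaxhighlight>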

3 × 3 generic matrix

Consider a 3 × 3 matrix

<math>\mathbf{A} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}.</math>

Its cofactor matrix is

<math>\mathbf{C} = \begin{bmatrix}
+\begin{vmatrix} b_2 & b_3 \\ c_2 & c_3 \end{vmatrix} &
-\begin{vmatrix} b_1 & b_3 \\ c_1 & c_3 \end{vmatrix} &
+\begin{vmatrix} b_1 & b_2 \\ c_1 & c_2 \end{vmatrix} \\[6pt]
-\begin{vmatrix} a_2 & a_3 \\ c_2 & c_3 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_3 \\ c_1 & c_3 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \end{vmatrix} \\[6pt]
+\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}
\end{bmatrix},</math>

where

<math>\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \det\begin{bmatrix} a & b \\ c & d \end{bmatrix}.</math>

Its adjugate is the transpose of its cofactor matrix,

<math>\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \begin{bmatrix}
+\begin{vmatrix} b_2 & b_3 \\ c_2 & c_3 \end{vmatrix} &
-\begin{vmatrix} a_2 & a_3 \\ c_2 & c_3 \end{vmatrix} &
+\begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} \\[6pt]
-\begin{vmatrix} b_1 & b_3 \\ c_1 & c_3 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_3 \\ c_1 & c_3 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} \\[6pt]
+\begin{vmatrix} b_1 & b_2 \\ c_1 & c_2 \end{vmatrix} &
-\begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \end{vmatrix} &
+\begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix}
\end{bmatrix}.</math>

3 × 3 numeric matrix

As a specific example, we have

<math>\operatorname{adj}\begin{bmatrix} -3 & 2 & -5 \\ -1 & 0 & -2 \\ 3 & -4 & 1 \end{bmatrix} = \begin{bmatrix} -8 & 18 & -4 \\ -5 & 12 & -1 \\ 4 & -6 & 2 \end{bmatrix}.</math>

It is easy to check that the adjugate is the inverse times the determinant, which is −6.

The −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A,

<math>\begin{bmatrix} -3 & -5 \\ -1 & -2 \end{bmatrix}.</math>

The (3,2) cofactor is a sign times the determinant of this submatrix:

<math>(-1)^{3+2} \det\begin{bmatrix} -3 & -5 \\ -1 & -2 \end{bmatrix} = -\bigl( (-3)(-2) - (-5)(-1) \bigr) = -1,</math>

and this is the (2,3) entry of the adjugate.
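The same bookkeeping is easy to reproduce numerically. A short NumPy check of this worked example, using det(A)·A⁻¹ as a stand-in for the adjugate (valid here because A is invertible):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[-3.0,  2.0, -5.0],
              [-1.0,  0.0, -2.0],
              [ 3.0, -4.0,  1.0]])

# (3,2) cofactor: delete row 3 and column 2 (1-based), apply the sign (-1)^(3+2)
sub = np.delete(np.delete(A, 2, axis=0), 1, axis=1)
print(int(round((-1) ** (3 + 2) * np.linalg.det(sub))))   # -1, the (2,3) entry of adj(A)

# Full adjugate, using adj(A) = det(A) A^{-1} since det(A) = -6 is non-zero
print(np.round(np.linalg.det(A) * np.linalg.inv(A)))
# [[-8. 18. -4.]
#  [-5. 12. -1.]
#  [ 4. -6.  2.]]
</syntaxhighlight>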

Properties

For any n × n matrix A, elementary computations show that adjugates have the following properties:

  • <math>\operatorname{adj}(\mathbf{I}) = \mathbf{I}</math>, where <math>\mathbf{I}</math> is the identity matrix.
  • <math>\operatorname{adj}(\mathbf{0}) = \mathbf{0}</math> for <math>n > 1</math>, while <math>\operatorname{adj}(\mathbf{0}) = \mathbf{I}</math> for <math>n = 1</math>.
  • <math>\operatorname{adj}(c\mathbf{A}) = c^{n-1} \operatorname{adj}(\mathbf{A})</math> for any scalar <math>c</math>.
  • <math>\operatorname{adj}(\mathbf{A}^\mathsf{T}) = \operatorname{adj}(\mathbf{A})^\mathsf{T}</math>.
  • <math>\det(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n-1}</math>.

Over the complex numbers,

  • <math>\operatorname{adj}(\overline{\mathbf{A}}) = \overline{\operatorname{adj}(\mathbf{A})}</math>, where the bar denotes complex conjugation.
  • <math>\operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*</math>, where the asterisk denotes the conjugate transpose.

Suppose that B is another n × n matrix. Then

<math>\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B}) \operatorname{adj}(\mathbf{A}).</math>

This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices A and B,

<math>\operatorname{adj}(\mathbf{B}) \operatorname{adj}(\mathbf{A}) = (\det \mathbf{B}) \mathbf{B}^{-1} (\det \mathbf{A}) \mathbf{A}^{-1} = (\det \mathbf{AB}) (\mathbf{AB})^{-1} = \operatorname{adj}(\mathbf{AB}).</math>

Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of A or B is not invertible.
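A quick numeric sanity check of adj(AB) = adj(B) adj(A), using det·inverse as the adjugate of the (almost surely invertible) random test matrices; this is an illustrative sketch, not a proof:

<syntaxhighlight lang="python">
import numpy as np

def adj(M):
    """adj(M) = det(M) M^{-1}; sufficient here because the test matrices are invertible."""
    return np.linalg.det(M) * np.linalg.inv(M)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
assert np.allclose(adj(A @ B), adj(B) @ adj(A))
</syntaxhighlight>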

A corollary of the previous formula is that, for any non-negative integer k,

<math>\operatorname{adj}(\mathbf{A}^k) = \operatorname{adj}(\mathbf{A})^k.</math>

If A is invertible, then the above formula also holds for negative k.

From the identity

<math>(\mathbf{A} + \mathbf{B}) \operatorname{adj}(\mathbf{A} + \mathbf{B}) \mathbf{B} = \det(\mathbf{A} + \mathbf{B})\, \mathbf{B} = \mathbf{B} \operatorname{adj}(\mathbf{A} + \mathbf{B}) (\mathbf{A} + \mathbf{B}),</math>

we deduce

<math>\mathbf{A} \operatorname{adj}(\mathbf{A} + \mathbf{B}) \mathbf{B} = \mathbf{B} \operatorname{adj}(\mathbf{A} + \mathbf{B}) \mathbf{A}.</math>

Suppose that A commutes with B. Multiplying the identity AB = BA on the left and right by adj(A) proves that

<math>\det(\mathbf{A}) \operatorname{adj}(\mathbf{A}) \mathbf{B} = \det(\mathbf{A})\, \mathbf{B} \operatorname{adj}(\mathbf{A}).</math>

If A is invertible, this implies that adj(A) also commutes with B. Over the real or complex numbers, continuity implies that adj(A) commutes with B even when A is not invertible.

Finally, there is a more general proof than the second proof, which only requires that an n × n matrix has entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). det(A + tI) is a polynomial in t with degree at most n, so it has at most n roots. Note that the ij th entry of adj((A + tI)B) is a polynomial of at most order n, and likewise for adj(B) adj(A + tI). These two polynomials at the ij th entry agree on at least n + 1 points, as we have at least n + 1 elements of the field where A + tI is invertible, and we have proven the identity for invertible matrices. Polynomials of degree n which agree on n + 1 points must be identical (subtract them from each other and you have n + 1 roots for a polynomial of degree at most n – a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of t. Thus, they take the same value when t = 0.

Using the above properties and other elementary computations, it is straightforward to show that if A has one of the following properties, then adj(A) does as well:

  • upper or lower triangular,
  • diagonal,
  • orthogonal,
  • unitary,
  • symmetric,
  • Hermitian,
  • normal.

If A is skew-symmetric, then adj(A) is skew-symmetric for even n and symmetric for odd n. Similarly, if A is skew-Hermitian, then adj(A) is skew-Hermitian for even n and Hermitian for odd n.

If A is invertible, then, as noted above, there is a formula for adj(A) in terms of the determinant and inverse of A. When A is not invertible, the adjugate satisfies different but closely related formulas:

  • If rk(A) ≤ n − 2, then adj(A) = 0, since every (n − 1) × (n − 1) minor of A vanishes.
  • If rk(A) = n − 1, then rk(adj(A)) = 1: some (n − 1) × (n − 1) minor is non-zero, so adj(A) ≠ 0, while the identity adj(A) A = det(A) I = 0 shows that the (n − 1)-dimensional column space of A lies in the kernel of adj(A).

Column substitution and Cramer's rule


Partition A into column vectors:

<math>\mathbf{A} = \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{bmatrix}.</math>

Let b be a column vector of size n. Fix 1 ≤ i ≤ n and consider the matrix formed by replacing column i of A by b:

<math>(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b}) \ \overset{\text{def}}{=}\ \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{bmatrix}.</math>

Laplace expand the determinant of this matrix along column i. The result is entry i of the product adj(A)b. Collecting these determinants for the different possible i yields an equality of column vectors

<math>\left( \det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b}) \right)_{i=1}^{n} = \operatorname{adj}(\mathbf{A})\, \mathbf{b}.</math>
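A numeric check of this column-substitution identity on random data, with adj(A) again computed as det(A)·A⁻¹ for convenience:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

adj_A = np.linalg.det(A) * np.linalg.inv(A)
col_sub_dets = np.array([
    np.linalg.det(np.column_stack([A[:, :i], b, A[:, i + 1:]]))   # replace column i by b
    for i in range(n)
])
assert np.allclose(col_sub_dets, adj_A @ b)
</syntaxhighlight>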

This formula has the following concrete consequence. Consider the linear system of equations

<math>\mathbf{A}\mathbf{x} = \mathbf{b}.</math>

Assume that A is non-singular. Multiplying this system on the left by adj(A) and dividing by the determinant yields

<math>\mathbf{x} = \frac{\operatorname{adj}(\mathbf{A})\, \mathbf{b}}{\det \mathbf{A}}.</math>

Applying the previous formula to this situation yields Cramer's rule,

<math>x_i = \frac{\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})}{\det \mathbf{A}},</math>

where x_i is the ith entry of x.
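Cramer's rule translates into a direct (if computationally inefficient) solver. The sketch below is illustrative only and uses a made-up 2 × 2 system:

<syntaxhighlight lang="python">
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via x_i = det(A with column i replaced by b) / det(A)."""
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        Ai = A.copy()
        Ai[:, i] = b                      # column substitution
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))                 # [0.8 1.4], the same as np.linalg.solve(A, b)
</syntaxhighlight>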

Characteristic polynomial

Let the characteristic polynomial of A be

<math>p(s) = \det(s\mathbf{I} - \mathbf{A}) = \sum_{i=0}^{n} p_i s^i \in R[s].</math>

The first divided difference of p is a symmetric polynomial of degree n − 1,

<math>\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k \in R[s, t].</math>

Multiply sI − A by its adjugate. Since p(A) = 0 by the Cayley–Hamilton theorem, some elementary manipulations reveal

<math>\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).</math>

In particular, the resolvent of A is defined to be

<math>R(z; \mathbf{A}) = (z\mathbf{I} - \mathbf{A})^{-1},</math>

and by the above formula, this is equal to

<math>R(z; \mathbf{A}) = \frac{\Delta p(z\mathbf{I}, \mathbf{A})}{p(z)}.</math>
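The identity adj(sI − A) = Δp(sI, A) can be verified symbolically for a concrete matrix. The SymPy sketch below reuses the 3 × 3 numeric example from earlier and evaluates the divided difference term by term (sI and A commute, so the substitution is unambiguous):

<syntaxhighlight lang="python">
from sympy import Matrix, Poly, cancel, eye, simplify, symbols, zeros

s, t = symbols('s t')
A = Matrix([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])
n = A.shape[0]

p = (s * eye(n) - A).det()                    # characteristic polynomial p(s)
dd = cancel((p - p.subs(s, t)) / (s - t))     # first divided difference, a polynomial in s and t

Dp = zeros(n, n)                              # Delta p(s I, A): substitute s I for s and A for t
for (j, k), coeff in Poly(dd, s, t).terms():
    Dp += coeff * s**j * A**k

assert simplify((s * eye(n) - A).adjugate() - Dp) == zeros(n, n)
</syntaxhighlight>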

Jacobi's formula

The adjugate also appears in Jacobi's formula for the derivative of the determinant. If A(t) is continuously differentiable, then

<math>\frac{\mathrm{d}(\det \mathbf{A})}{\mathrm{d}t}(t) = \operatorname{tr}\left( \operatorname{adj}(\mathbf{A}(t))\, \mathbf{A}'(t) \right).</math>

It follows that the total derivative of the determinant is the transpose of the adjugate:

<math>\mathrm{d}(\det \mathbf{A})_{\mathbf{A}_0} = \operatorname{adj}(\mathbf{A}_0)^\mathsf{T}.</math>
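A finite-difference check of Jacobi's formula for the assumed family A(t) = A₀ + tB, so that A′(t) = B:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
A0 = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
h = 1e-6

adj_A0 = np.linalg.det(A0) * np.linalg.inv(A0)
numeric = (np.linalg.det(A0 + h * B) - np.linalg.det(A0 - h * B)) / (2 * h)  # central difference
exact = np.trace(adj_A0 @ B)                                                 # tr(adj(A(0)) A'(0))
assert abs(numeric - exact) < 1e-4 * max(1.0, abs(exact))
</syntaxhighlight>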

Cayley–Hamilton formula

Let <math>p_\mathbf{A}(t)</math> be the characteristic polynomial of A. The Cayley–Hamilton theorem states that

<math>p_\mathbf{A}(\mathbf{A}) = \mathbf{0}.</math>

Separating the constant term and multiplying the equation by adj(A) gives an expression for the adjugate that depends only on A and the coefficients of <math>p_\mathbf{A}(t)</math>. These coefficients can be explicitly represented in terms of traces of powers of A using complete exponential Bell polynomials. The resulting formula is

<math>\operatorname{adj}(\mathbf{A}) = \sum_{s=0}^{n-1} \mathbf{A}^{s} \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{\ell=1}^{n-1} \frac{(-1)^{k_\ell + 1}}{\ell^{k_\ell} k_\ell!} \operatorname{tr}(\mathbf{A}^\ell)^{k_\ell},</math>

where n is the dimension of A, and the sum is taken over s and all sequences of non-negative integers <math>k_\ell</math> satisfying the linear Diophantine equation

<math>s + \sum_{\ell=1}^{n-1} \ell k_\ell = n - 1.</math>

For the 2 × 2 case, this gives

<math>\operatorname{adj}(\mathbf{A}) = \mathbf{I}_2 (\operatorname{tr} \mathbf{A}) - \mathbf{A}.</math>

For the 3 × 3 case, this gives

<math>\operatorname{adj}(\mathbf{A}) = \tfrac{1}{2} \mathbf{I}_3 \left( (\operatorname{tr} \mathbf{A})^2 - \operatorname{tr} \mathbf{A}^2 \right) - \mathbf{A} (\operatorname{tr} \mathbf{A}) + \mathbf{A}^2.</math>

For the 4 × 4 case, this gives

<math>\operatorname{adj}(\mathbf{A}) = \tfrac{1}{6} \mathbf{I}_4 \left( (\operatorname{tr} \mathbf{A})^3 - 3 \operatorname{tr} \mathbf{A} \operatorname{tr} \mathbf{A}^2 + 2 \operatorname{tr} \mathbf{A}^3 \right) - \tfrac{1}{2} \mathbf{A} \left( (\operatorname{tr} \mathbf{A})^2 - \operatorname{tr} \mathbf{A}^2 \right) + \mathbf{A}^2 (\operatorname{tr} \mathbf{A}) - \mathbf{A}^3.</math>
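These trace formulas are easy to spot-check numerically; for instance, the 3 × 3 version applied to the numeric example above, with det(A)·A⁻¹ serving as the reference adjugate:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[-3.0,  2.0, -5.0],
              [-1.0,  0.0, -2.0],
              [ 3.0, -4.0,  1.0]])
I, tr = np.eye(3), np.trace

reference = np.linalg.det(A) * np.linalg.inv(A)
formula = 0.5 * I * (tr(A) ** 2 - tr(A @ A)) - A * tr(A) + A @ A
assert np.allclose(reference, formula)
</syntaxhighlight>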

The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of A.
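A compact Python sketch of that recursion, written from the algorithm's standard description (the function name and variable names are assumptions, not taken from the text); the matrix produced at the last step is, up to sign, the adjugate:

<syntaxhighlight lang="python">
import numpy as np

def faddeev_leverrier_adjugate(A):
    """Return adj(A) and the coefficients of det(lambda I - A) = lambda^n + c[1] lambda^(n-1) + ... + c[n]."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.zeros((n, n))
    c = np.zeros(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        M = A @ M + c[k - 1] * I           # M_k
        c[k] = -np.trace(A @ M) / k        # coefficient c_{n-k}
    return (-1) ** (n - 1) * M, c          # adj(A) = (-1)^(n-1) M_n

A = np.array([[-3.0,  2.0, -5.0],
              [-1.0,  0.0, -2.0],
              [ 3.0, -4.0,  1.0]])
adj_A, coeffs = faddeev_leverrier_adjugate(A)
print(np.round(adj_A))                     # matches the 3 x 3 numeric example above
</syntaxhighlight>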

In general, the adjugate of a matrix of arbitrary dimension N can be computed using the Einstein summation convention:

<math>(\operatorname{adj}(\mathbf{A}))_{i_N j_N} = \frac{1}{(N-1)!} \epsilon_{i_1 i_2 \ldots i_N} \epsilon_{j_1 j_2 \ldots j_N} A_{j_1 i_1} A_{j_2 i_2} \ldots A_{j_{N-1} i_{N-1}}.</math>
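For small N this formula can be evaluated literally with np.einsum. The Levi-Civita helper below is an assumption introduced for the illustration, and the brute-force construction is exponential in N, so this is only a demonstration:

<syntaxhighlight lang="python">
import math
import string
from itertools import permutations

import numpy as np

def levi_civita(n):
    """Dense rank-n Levi-Civita symbol, built by brute force over all permutations."""
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        eps[p] = (-1) ** inversions
    return eps

def adjugate_einsum(A):
    """adj(A) from the epsilon-tensor formula above; practical only for small N."""
    n = A.shape[0]
    eps = levi_civita(n)
    i_idx = string.ascii_lowercase[:n]               # i_1 ... i_N
    j_idx = string.ascii_lowercase[n:2 * n]          # j_1 ... j_N
    operands = [eps, eps] + [A] * (n - 1)
    subscripts = [i_idx, j_idx] + [j_idx[k] + i_idx[k] for k in range(n - 1)]
    spec = ",".join(subscripts) + "->" + i_idx[-1] + j_idx[-1]
    return np.einsum(spec, *operands) / math.factorial(n - 1)

A = np.array([[-3.0,  2.0, -5.0],
              [-1.0,  0.0, -2.0],
              [ 3.0, -4.0,  1.0]])
print(np.round(adjugate_einsum(A)))                  # again matches the 3 x 3 numeric example
</syntaxhighlight>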

Relation to exterior algebras

The adjugate can be viewed in abstract terms using exterior algebras. Let V be an n-dimensional vector space. The exterior product defines a bilinear pairing <math>V \times \wedge^{n-1} V \to \wedge^n V</math>. Abstractly, <math>\wedge^n V</math> is isomorphic to <math>\mathbf{R}</math>, and under any such isomorphism the exterior product is a perfect pairing. That is, it yields an isomorphism <math>\phi \colon V \to \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)</math>. This isomorphism sends each <math>\mathbf{v} \in V</math> to the map <math>\phi_\mathbf{v}</math> defined by <math>\phi_\mathbf{v}(\alpha) = \mathbf{v} \wedge \alpha</math>. Suppose that <math>T \colon V \to V</math> is a linear transformation. Pullback by the (n − 1)st exterior power of T induces a morphism of Hom spaces. The adjugate of T is the composite

<math>V \xrightarrow{\ \phi\ } \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V) \xrightarrow{(\wedge^{n-1} T)^*} \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V) \xrightarrow{\ \phi^{-1}\ } V.</math>

If <math>V = \mathbf{R}^n</math> is endowed with its canonical basis <math>\mathbf{e}_1, \ldots, \mathbf{e}_n</math>, and if the matrix of T in this basis is A, then the adjugate of T is the adjugate of A. To see why, give <math>\wedge^{n-1} \mathbf{R}^n</math> the basis

<math>\{\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n\}_{k=1}^{n}.</math>

Fix a basis vector <math>\mathbf{e}_i</math> of <math>\mathbf{R}^n</math>. The image of <math>\mathbf{e}_i</math> under <math>\phi</math> is determined by where it sends basis vectors:

<math>\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n) = \begin{cases} (-1)^{i-1}\, \mathbf{e}_1 \wedge \cdots \wedge \mathbf{e}_n, & \text{if } k = i, \\ 0, & \text{otherwise.} \end{cases}</math>

On basis vectors, the (n − 1)st exterior power of T is

<math>\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_j \wedge \cdots \wedge \mathbf{e}_n \mapsto \sum_{k=1}^{n} \mathbf{M}_{kj}\, \mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n,</math>

where <math>\mathbf{M}_{kj}</math> is the (k, j)-minor of A. Each of these terms maps to zero under <math>\phi_{\mathbf{e}_i}</math> except the k = i term. Therefore, the pullback of <math>\phi_{\mathbf{e}_i}</math> is the linear transformation for which

<math>\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_j \wedge \cdots \wedge \mathbf{e}_n \mapsto (-1)^{i-1} \mathbf{M}_{ij}\, \mathbf{e}_1 \wedge \cdots \wedge \mathbf{e}_n,</math>

that is, it equals

<math>\sum_{j=1}^{n} (-1)^{i+j} \mathbf{M}_{ij}\, \phi_{\mathbf{e}_j}.</math>

Applying the inverse of <math>\phi</math> shows that the adjugate of T is the linear transformation for which

<math>\mathbf{e}_i \mapsto \sum_{j=1}^{n} (-1)^{i+j} \mathbf{M}_{ij}\, \mathbf{e}_j.</math>

Consequently, its matrix representation is the adjugate of A.

If <math>V = \mathbf{R}^n</math> is endowed with an inner product and a volume form, then the map <math>\phi</math> can be decomposed further. In this case, <math>\phi</math> can be understood as the composite of the Hodge star operator and dualization. Specifically, if <math>\omega</math> is the volume form, then it, together with the inner product, determines an isomorphism <math>\omega^\vee \colon \wedge^n V \to \mathbf{R}</math>. This induces an isomorphism <math>\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee</math>. A vector <math>\mathbf{v}</math> in <math>\mathbf{R}^n</math> corresponds to the linear functional <math>(\alpha \mapsto \omega^\vee(\mathbf{v} \wedge \alpha)) \in \wedge^{n-1} (\mathbf{R}^n)^\vee</math>. By the definition of the Hodge star operator, this linear functional is dual to <math>*\mathbf{v}</math>. That is, <math>\omega^\vee \circ \phi</math> equals <math>\mathbf{v} \mapsto *\mathbf{v}^\vee</math>.

Higher adjugates

Let A be an n × n matrix, and fix 0 ≤ r ≤ n. The rth higher adjugate of A is a <math>\binom{n}{r} \times \binom{n}{r}</math> matrix, denoted <math>\operatorname{adj}_r(\mathbf{A})</math>, whose entries are indexed by size-r subsets I and J of {1, ..., n}. Let <math>I^c</math> and <math>J^c</math> denote the complements of I and J, respectively. Also let <math>\mathbf{A}_{I^c, J^c}</math> denote the submatrix of A containing those rows and columns whose indices are in <math>I^c</math> and <math>J^c</math>, respectively. Then the (I, J) entry of <math>\operatorname{adj}_r(\mathbf{A})</math> is

<math>(-1)^{\sigma(I) + \sigma(J)} \det \mathbf{A}_{J^c, I^c},</math>

where σ(I) and σ(J) are the sums of the elements of I and J, respectively.

Basic properties of higher adjugates include:

  • <math>\operatorname{adj}_0(\mathbf{A}) = (\det \mathbf{A})</math>, a 1 × 1 matrix.
  • <math>\operatorname{adj}_1(\mathbf{A}) = \operatorname{adj}(\mathbf{A})</math>.
  • <math>\operatorname{adj}_n(\mathbf{A}) = (1)</math>.
  • <math>C_r(\mathbf{A}) \operatorname{adj}_r(\mathbf{A}) = \operatorname{adj}_r(\mathbf{A}) C_r(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{I}</math>, where <math>C_r(\mathbf{A})</math> denotes the rth compound matrix, whose entries are the r × r minors <math>\det \mathbf{A}_{I, J}</math>.
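The definition and these properties can be checked numerically for a small matrix. The helpers below (submatrix extraction, compound matrix, 0-based index sums) are assumptions introduced for the illustration:

<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations
from math import comb

def compound(A, r):
    """r-th compound matrix: determinants of all r x r submatrices, subsets in lexicographic order."""
    n = A.shape[0]
    idx = list(combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

def higher_adjugate(A, r):
    """r-th higher adjugate, following the sign and complement convention above (0-based indices)."""
    n = A.shape[0]
    idx = list(combinations(range(n), r))
    def comp(S):
        return [k for k in range(n) if k not in S]
    out = np.empty((len(idx), len(idx)))
    for a, I in enumerate(idx):
        for b, J in enumerate(idx):
            out[a, b] = (-1) ** (sum(I) + sum(J)) * np.linalg.det(A[np.ix_(comp(J), comp(I))])
    return out

A = np.arange(1.0, 17.0).reshape(4, 4) + np.eye(4)    # a generic invertible 4 x 4 matrix
assert np.allclose(higher_adjugate(A, 1), np.linalg.det(A) * np.linalg.inv(A))  # adj_1 = adj
r = 2
assert np.allclose(compound(A, r) @ higher_adjugate(A, r),
                   np.linalg.det(A) * np.eye(comb(4, r)))
</syntaxhighlight>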

Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting <math>\wedge^r V</math> and <math>\wedge^{n-r} V</math> for <math>V</math> and <math>\wedge^{n-1} V</math>, respectively.

Iterated adjugates

Iteratively taking the adjugate of an invertible n × n matrix A k times yields

<math>\underbrace{\operatorname{adj} \dotsm \operatorname{adj}}_{k}(\mathbf{A}) = \det(\mathbf{A})^{\frac{(n-1)^k - (-1)^k}{n}} \mathbf{A}^{(-1)^k},</math>
<math>\det\left( \underbrace{\operatorname{adj} \dotsm \operatorname{adj}}_{k}(\mathbf{A}) \right) = \det(\mathbf{A})^{(n-1)^k}.</math>

For example,

<math>\operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n-2} \mathbf{A},</math>
<math>\det(\operatorname{adj}(\operatorname{adj}(\mathbf{A}))) = \det(\mathbf{A})^{(n-1)^2}.</math>
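A numeric spot check of the k = 2 case, again using the 3 × 3 example from earlier (so n = 3 and det(A) = −6); the adj helper is an assumption valid for the invertible matrices used here:

<syntaxhighlight lang="python">
import numpy as np

def adj(M):
    return np.linalg.det(M) * np.linalg.inv(M)       # valid for invertible M

A = np.array([[-3.0,  2.0, -5.0],
              [-1.0,  0.0, -2.0],
              [ 3.0, -4.0,  1.0]])
n, d = 3, np.linalg.det(A)

assert np.allclose(adj(adj(A)), d ** (n - 2) * A)                   # adj(adj(A)) = det(A)^(n-2) A
assert np.isclose(np.linalg.det(adj(adj(A))), d ** ((n - 1) ** 2))  # det(A)^((n-1)^2)
</syntaxhighlight>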

See also

References

Template:Reflist

Bibliography

  • Roger A. Horn and Charles R. Johnson (2013), Matrix Analysis, Second Edition. Cambridge University Press, Template:ISBN
  • Roger A. Horn and Charles R. Johnson (1991), Topics in Matrix Analysis. Cambridge University Press, Template:ISBN

Template:Matrix classes