Kernel (linear algebra)


In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the part of the domain which is mapped to the zero vector of the co-domain; the kernel is always a linear subspace of the domain.[1] That is, given a linear map L : V → W between two vector spaces V and W, the kernel of L is the vector space of all elements 𝐯 of V such that L(𝐯) = 𝟎, where 𝟎 denotes the zero vector in W,[2] or more symbolically: ker(L) = { 𝐯 ∈ V : L(𝐯) = 𝟎 } = L⁻¹(𝟎).

Properties

Kernel and image of a linear map L from V to W

The kernel of L is a linear subspace of the domain V.[3][2] In the linear map L : V → W, two elements of V have the same image in W if and only if their difference lies in the kernel of L, that is, L(𝐯₁) = L(𝐯₂) if and only if L(𝐯₁ − 𝐯₂) = 𝟎.

From this, it follows by the first isomorphism theorem that the image of L is isomorphic to the quotient of V by the kernel: im(L) ≅ V / ker(L). In the case where V is finite-dimensional, this implies the rank–nullity theorem: dim(ker L) + dim(im L) = dim(V), where the term rank refers to the dimension of the image of L, dim(im L), while nullity refers to the dimension of the kernel of L, dim(ker L).[4] That is, Rank(L) = dim(im L) and Nullity(L) = dim(ker L), so that the rank–nullity theorem can be restated as Rank(L) + Nullity(L) = dim(domain of L).

When V is an inner product space, the quotient V / ker(L) can be identified with the orthogonal complement in V of ker(L). This is the generalization to linear operators of the row space, or coimage, of a matrix.

Generalization to modules

The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply.

In functional analysis

If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator L : V → W is continuous if and only if the kernel of L is a closed subspace of V.

Representation as matrix multiplication

Consider a linear map represented as an m × n matrix A with coefficients in a field K (typically ℝ or ℂ), that is operating on column vectors 𝐱 with n components over K. The kernel of this linear map is the set of solutions to the equation A𝐱 = 𝟎, where 𝟎 is understood as the zero vector. The dimension of the kernel of A is called the nullity of A. In set-builder notation, N(A) = Null(A) = ker(A) = { 𝐱 ∈ Kⁿ : A𝐱 = 𝟎 }. The matrix equation is equivalent to a homogeneous system of linear equations:

A𝐱 = 𝟎  ⇔  a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = 0
           a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = 0
                        ⋮
           aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = 0.

Thus the kernel of A is the same as the solution set to the above homogeneous equations.
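As a minimal computational sketch of the definition above (the choice of SymPy is an assumption, since the article prescribes no software), `Matrix.nullspace()` returns a basis of ker(A) as column vectors:

```python
# Minimal sketch using SymPy (an assumed tool; the matrix is a made-up example).
# Matrix.nullspace() returns a list of column vectors spanning ker(A).
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])        # rank 1, so the kernel has dimension 3 - 1 = 2

basis = A.nullspace()
for v in basis:
    # every basis vector solves the homogeneous system A x = 0
    assert A * v == Matrix([0, 0])
```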

Subspace properties

The kernel of an m × n matrix A over a field K is a linear subspace of Kⁿ. That is, the kernel of A, the set Null(A), has the following three properties:

  1. Null(A) always contains the zero vector, since A𝟎 = 𝟎.
  2. If 𝐱 ∈ Null(A) and 𝐲 ∈ Null(A), then 𝐱 + 𝐲 ∈ Null(A). This follows from the distributivity of matrix multiplication over addition.
  3. If 𝐱 ∈ Null(A) and c is a scalar (c ∈ K), then c𝐱 ∈ Null(A), since A(c𝐱) = c(A𝐱) = c𝟎 = 𝟎.

The row space of a matrix

The product A𝐱 can be written in terms of the dot product of vectors as follows:

A𝐱 = [ 𝐚₁ · 𝐱 ]
     [ 𝐚₂ · 𝐱 ]
     [    ⋮   ]
     [ 𝐚ₘ · 𝐱 ].

Here, 𝐚₁, …, 𝐚ₘ denote the rows of the matrix A. It follows that 𝐱 is in the kernel of A if and only if 𝐱 is orthogonal (or perpendicular) to each of the row vectors of A (since orthogonality is defined as having a dot product of 0).

The row space, or coimage, of a matrix A is the span of the row vectors of A. By the above reasoning, the kernel of A is the orthogonal complement to the row space. That is, a vector 𝐱 lies in the kernel of A if and only if it is perpendicular to every vector in the row space of A.

The dimension of the row space of A is called the rank of A, and the dimension of the kernel of A is called the nullity of A. These quantities are related by the rank–nullity theorem[4] rank(A) + nullity(A) = n.

Left null space

The left null space, or cokernel, of a matrix A consists of all column vectors 𝐱 such that 𝐱ᵀA = 𝟎ᵀ, where T denotes the transpose of a matrix. The left null space of A is the same as the kernel of Aᵀ. The left null space of A is the orthogonal complement to the column space of A, and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of A are the four fundamental subspaces associated with the matrix A.
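The relations above can be checked numerically on a small example (a sketch; the rank-1 matrix below is made up for illustration). A vector in the left null space annihilates A from the left and is orthogonal to every column of A:

```python
import numpy as np

# Hypothetical rank-1 matrix: both columns are multiples of (1, 2, 3).
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])

y = np.array([1.0, 1.0, -1.0])     # candidate left-null-space vector

assert np.allclose(y @ A, 0)        # y^T A = 0^T
assert np.allclose(A.T @ y, 0)      # equivalently, y lies in ker(A^T)
assert np.isclose(y @ A[:, 0], 0)   # orthogonal to the column space
```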

Nonhomogeneous systems of linear equations

The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:

A𝐱 = 𝐛  or  a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
            a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
                         ⋮
            aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = bₘ.

If 𝐮 and 𝐯 are two possible solutions to the above equation, then A(𝐮 − 𝐯) = A𝐮 − A𝐯 = 𝐛 − 𝐛 = 𝟎. Thus, the difference of any two solutions to the equation A𝐱 = 𝐛 lies in the kernel of A.

It follows that any solution to the equation A𝐱 = 𝐛 can be expressed as the sum of a fixed solution 𝐯 and an arbitrary element of the kernel. That is, the solution set to the equation A𝐱 = 𝐛 is { 𝐯 + 𝐱 : A𝐯 = 𝐛 and 𝐱 ∈ Null(A) }. Geometrically, this says that the solution set to A𝐱 = 𝐛 is the translation of the kernel of A by the vector 𝐯. See also Fredholm alternative and flat (geometry).
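The "particular solution plus kernel element" structure can be verified numerically; the underdetermined system below is a made-up example, and NumPy's least-squares routine is just one assumed way to obtain a particular solution:

```python
import numpy as np

# Hypothetical underdetermined system A x = b.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0]])
b = np.array([6.0, 5.0])

u = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
assert np.allclose(A @ u, b)

k = np.array([5.0, -4.0, 1.0])             # a vector in ker(A): A k = 0
assert np.allclose(A @ k, 0)

# adding any multiple of a kernel vector yields another solution of A x = b
v = u + 2.5 * k
assert np.allclose(A @ v, b)
```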

Illustration

The following is a simple illustration of the computation of the kernel of a matrix (see § Computation by Gaussian elimination, below, for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel.

Consider the matrix

A = [  2  3  5 ]
    [ −4  2  3 ].

The kernel of this matrix consists of all vectors (x, y, z) ∈ ℝ³ for which

[  2  3  5 ] [ x ]   [ 0 ]
[ −4  2  3 ] [ y ] = [ 0 ],
             [ z ]

which can be expressed as a homogeneous system of linear equations involving x, y, and z:

2x + 3y + 5z = 0,
−4x + 2y + 3z = 0.

The same linear equations can also be written in matrix form as:

[  2  3  5  0 ]
[ −4  2  3  0 ].

Through Gauss–Jordan elimination, the matrix can be reduced to:

[ 1  0  1/16  0 ]
[ 0  1  13/8  0 ].

Rewriting the matrix in equation form yields:

x = −(1/16)z,
y = −(13/8)z.

The elements of the kernel can be further expressed in parametric vector form, as follows:

[ x ]     [ −1/16 ]
[ y ] = c [ −13/8 ]    (where c ∈ ℝ).
[ z ]     [   1   ]

Since c is a free variable ranging over all real numbers, this can be expressed equally well as:

[ x ]     [  −1 ]
[ y ] = c [ −26 ]
[ z ]     [  16 ].

The kernel of A is precisely the solution set to these equations (in this case, a line through the origin in ℝ³). Here, the vector (−1, −26, 16)ᵀ constitutes a basis of the kernel of A. The nullity of A is therefore 1, as the kernel is spanned by a single vector.

The following dot products are zero:

[ 2  3  5 ] · (−1, −26, 16)ᵀ = 0   and   [ −4  2  3 ] · (−1, −26, 16)ᵀ = 0,

which illustrates that vectors in the kernel of A are orthogonal to each of the row vectors of A.

These two (linearly independent) row vectors span the row space of A, a plane orthogonal to the vector (−1, −26, 16)ᵀ.

With the rank 2 of A, the nullity 1 of A, and the dimension 3 of the domain of A, we have an illustration of the rank–nullity theorem.
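The numbers in this illustration can be verified with a short NumPy check (a sketch; `numpy.linalg.matrix_rank` is one assumed way to get the rank):

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
v = np.array([-1.0, -26.0, 16.0])   # kernel basis vector found above

assert np.allclose(A @ v, 0)         # v is orthogonal to both rows of A
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank–nullity: rank + nullity = n
assert (rank, nullity) == (2, 1)
```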

Examples

Computation by Gaussian elimination

A basis of the kernel of a matrix may be computed by Gaussian elimination.

For this purpose, given an m × n matrix A, we construct first the row augmented matrix [A; I], obtained by stacking A on top of the n × n identity matrix I.

Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix [B; C]. A basis of the kernel of A consists of the non-zero columns of C such that the corresponding column of B is a zero column.

In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists of changing the basis of the vector space generated by the columns whose upper part is zero.

For example, suppose that

A = [ 1  0  −3  0   2  −8 ]
    [ 0  1   5  0  −1   4 ]
    [ 0  0   0  1   7  −9 ]
    [ 0  0   0  0   0   0 ].

Then

[ A ]   [ 1  0  −3  0   2  −8 ]
[ I ] = [ 0  1   5  0  −1   4 ]
        [ 0  0   0  1   7  −9 ]
        [ 0  0   0  0   0   0 ]
        [ 1  0   0  0   0   0 ]
        [ 0  1   0  0   0   0 ]
        [ 0  0   1  0   0   0 ]
        [ 0  0   0  1   0   0 ]
        [ 0  0   0  0   1   0 ]
        [ 0  0   0  0   0   1 ].

Putting the upper part in column echelon form by column operations on the whole matrix gives

[ B ]   [ 1  0  0   0   0   0 ]
[ C ] = [ 0  1  0   0   0   0 ]
        [ 0  0  1   0   0   0 ]
        [ 0  0  0   0   0   0 ]
        [ 1  0  0   3  −2   8 ]
        [ 0  1  0  −5   1  −4 ]
        [ 0  0  0   1   0   0 ]
        [ 0  0  1   0  −7   9 ]
        [ 0  0  0   0   1   0 ]
        [ 0  0  0   0   0   1 ].

The last three columns of B are zero columns. Therefore, the three last vectors of C,

[  3 ]   [ −2 ]   [  8 ]
[ −5 ]   [  1 ]   [ −4 ]
[  1 ] , [  0 ] , [  0 ]
[  0 ]   [ −7 ]   [  9 ]
[  0 ]   [  1 ]   [  0 ]
[  0 ]   [  0 ]   [  1 ]

are a basis of the kernel of A.

Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that [A; I] reduces to [B; C] means that there exists an invertible matrix P such that [A; I]P = [B; C], with B in column echelon form. Thus AP = B, IP = C, and AC = B. A column vector 𝐯 belongs to the kernel of A (that is, A𝐯 = 𝟎) if and only if B𝐰 = 𝟎, where 𝐰 = P⁻¹𝐯 = C⁻¹𝐯. As B is in column echelon form, B𝐰 = 𝟎 if and only if the nonzero entries of 𝐰 correspond to the zero columns of B. By multiplying by C, one may deduce that this is the case if and only if 𝐯 = C𝐰 is a linear combination of the corresponding columns of C.
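The [A; I] column-reduction method described above can be sketched in a few lines of exact rational arithmetic (a hedged illustration rather than production code; the function name is made up):

```python
from fractions import Fraction

def kernel_basis(A):
    """Basis of ker(A) via column operations on the stacked matrix [A; I].

    A sketch of the method described above, in exact rational arithmetic;
    A is a list of m rows of n entries each.
    """
    m, n = len(A), len(A[0])
    # Stack A on top of the n x n identity; all operations act on columns.
    M = [[Fraction(x) for x in row] for row in A]
    M += [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    row = col = 0
    while row < m and col < n:
        # find a column at or right of `col` with a nonzero entry in `row`
        piv = next((j for j in range(col, n) if M[row][j] != 0), None)
        if piv is None:
            row += 1                       # no pivot in this row
            continue
        for r in range(m + n):             # swap pivot column into place
            M[r][col], M[r][piv] = M[r][piv], M[r][col]
        p = M[row][col]
        for j in range(n):                 # clear the rest of the row
            if j != col and M[row][j] != 0:
                f = M[row][j] / p
                for r in range(m + n):
                    M[r][j] -= f * M[r][col]
        row += 1
        col += 1
    # columns whose upper (A) part is zero: the lower (I) part is a kernel vector
    return [[M[m + r][j] for r in range(n)]
            for j in range(n)
            if all(M[r][j] == 0 for r in range(m))]
```

Applied to the example matrix above, the function reproduces the three basis vectors read off from the last columns of C.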

Numerical computation

The problem of computing the kernel on a computer depends on the nature of the coefficients.

Exact coefficients

If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduce the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication).

For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.

Floating point computation

For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of a much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. has a low condition number.[5]

Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the LAPACK library.
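In floating point, a standard approach is the singular value decomposition: the right singular vectors belonging to (numerically) zero singular values span the kernel. A minimal sketch follows (the tolerance choice is an assumption; SciPy's `scipy.linalg.null_space` packages the same idea on top of LAPACK):

```python
import numpy as np

def null_space(A, rtol=1e-10):
    # SVD-based numerical kernel: right singular vectors whose singular
    # values fall below a tolerance span ker(A). rtol is an assumed cutoff.
    u, s, vh = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 0.0)
    rank = int((s > tol).sum())
    return vh[rank:].T               # columns form an orthonormal kernel basis

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
N = null_space(A)
assert N.shape == (3, 1)             # nullity 1, matching the illustration
assert np.allclose(A @ N, 0)
```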


Notes and references


  1. Template:Cite web
  2. 2.0 2.1 Template:Cite web
  3. Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Template:Harvnb, Template:Harvnb, and Strang's lectures.
  4. 4.0 4.1 Template:Cite web
  5. Template:Cite web