Linear differential equation


In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form a_0(x)y + a_1(x)y' + a_2(x)y'' + ⋯ + a_n(x)y^(n) = b(x), where a_0(x), …, a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y', …, y^(n) are the successive derivatives of an unknown function y of the variable x.

Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives.

Types of solution

A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any.

The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, and integration, and contains many usual and special functions, such as the exponential function, the logarithm, sine, cosine, inverse trigonometric functions, the error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions allows most operations of calculus to be performed algorithmically on these functions, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.

Basic terminology

The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation.

A solution of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.

Linear differential operator

A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted d^i/dx^i in the case of univariate functions, and ∂^(i_1 + ⋯ + i_n)/∂x_1^(i_1) ⋯ ∂x_n^(i_n) in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping.

A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator thus has the form[1] a_0(x) + a_1(x) d/dx + ⋯ + a_n(x) d^n/dx^n, where a_0(x), …, a_n(x) are differentiable functions, and the nonnegative integer n is the order of the operator (if a_n(x) is not the zero function).

Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(X), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar.

As the sum of two linear operators is a linear operator, as is the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They also form a free module over the ring of differentiable functions.

The language of operators allows a compact writing for differential equations: if L = a_0(x) + a_1(x) d/dx + ⋯ + a_n(x) d^n/dx^n is a linear differential operator, then the equation a_0(x)y + a_1(x)y' + a_2(x)y'' + ⋯ + a_n(x)y^(n) = b(x) may be rewritten Ly = b(x).

There may be several variants to this notation; in particular, the variable of differentiation may appear explicitly or not in y and the right-hand side of the equation, as in Ly(x) = b(x) or Ly = b.

The kernel of a linear differential operator is its kernel as a linear mapping, that is, the vector space of the solutions of the (homogeneous) differential equation Ly = 0.

In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation Ly(x) = b(x) have the form S_0(x) + c_1 S_1(x) + ⋯ + c_n S_n(x), where c_1, …, c_n are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I if the functions b, a_0, …, a_n are continuous in I, and there is a positive real number k such that |a_n(x)| > k for every x in I.

Homogeneous equation with constant coefficients

A homogeneous linear differential equation has constant coefficients if it has the form a_0 y + a_1 y' + a_2 y'' + ⋯ + a_n y^(n) = 0, where a_0, …, a_n are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.

The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function e^x, which is the unique solution of the equation f' = f such that f(0) = 1. It follows that the nth derivative of e^(cx) is c^n e^(cx), and this allows solving homogeneous linear differential equations rather easily.

Let a_0 y + a_1 y' + a_2 y'' + ⋯ + a_n y^(n) = 0 be a homogeneous linear differential equation with constant coefficients (that is, a_0, …, a_n are real or complex numbers).

Searching solutions of this equation that have the form e^(αx) is equivalent to searching the constants α such that a_0 e^(αx) + a_1 α e^(αx) + a_2 α^2 e^(αx) + ⋯ + a_n α^n e^(αx) = 0. Factoring out e^(αx) (which is never zero) shows that α must be a root of the characteristic polynomial a_0 + a_1 t + a_2 t^2 + ⋯ + a_n t^n of the differential equation, which is the left-hand side of the characteristic equation a_0 + a_1 t + a_2 t^2 + ⋯ + a_n t^n = 0.

When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions and their derivatives at x = 0. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator).
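As a concrete sketch of this recipe in Python (the sample equation y'' − 3y' + 2y = 0 is an illustrative choice, not taken from the text), one can find the characteristic roots numerically and substitute each resulting exponential back into the equation:

```python
import numpy as np

# Characteristic polynomial of y'' - 3y' + 2y = 0 is t^2 - 3t + 2
# (numpy.roots expects coefficients from highest degree to lowest).
roots = np.roots([1.0, -3.0, 2.0])

# Each root alpha yields the solution e^(alpha x); substitute it back
# at a sample point, using (e^(alpha x))^(k) = alpha^k e^(alpha x).
x = 0.7
for alpha in roots:
    y = np.exp(alpha * x)
    y1 = alpha * y        # first derivative
    y2 = alpha**2 * y     # second derivative
    assert abs(y2 - 3*y1 + 2*y) < 1e-9

assert np.allclose(sorted(roots), [1.0, 2.0])
```

Since the roots 1 and 2 are distinct, e^x and e^(2x) form a basis of the solution space of this example.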

Example

The equation y'''' − 2y''' + 2y'' − 2y' + y = 0 has the characteristic equation z^4 − 2z^3 + 2z^2 − 2z + 1 = 0. This has zeros i, −i, and 1 (multiplicity 2). The solution basis is thus e^(ix), e^(−ix), e^x, x e^x. A real basis of solutions is thus cos x, sin x, e^x, x e^x.
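This real basis can be verified numerically; a quick sketch, with the derivatives of each basis function written out in closed form by hand:

```python
import math

# Check that each element of the real basis satisfies
# y'''' - 2y''' + 2y'' - 2y' + y = 0 at a sample point.
# Each entry lists (y, y', y'', y''', y'''') in closed form.
basis = {
    "cos x": lambda x: (math.cos(x), -math.sin(x), -math.cos(x),
                        math.sin(x), math.cos(x)),
    "sin x": lambda x: (math.sin(x), math.cos(x), -math.sin(x),
                        -math.cos(x), math.sin(x)),
    "e^x":   lambda x: tuple(math.exp(x) for _ in range(5)),
    "x e^x": lambda x: tuple((x + k) * math.exp(x) for k in range(5)),
}
x = 0.3
residuals = {}
for name, derivs in basis.items():
    y, y1, y2, y3, y4 = derivs(x)
    residuals[name] = y4 - 2*y3 + 2*y2 - 2*y1 + y

assert all(abs(r) < 1e-9 for r in residuals.values())
```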

In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solution vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form x^k e^(αx), where k is a nonnegative integer, α is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if α is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as P(t)(t − α)^m. Thus, applying the differential operator of the equation is equivalent to applying first m times the operator d/dx − α, and then the operator that has P as characteristic polynomial. By the exponential shift theorem, (d/dx − α)(x^k e^(αx)) = k x^(k−1) e^(αx),

and thus one gets zero after k + 1 applications of d/dx − α.

As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions.

In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a − ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing x^k e^((a+ib)x) and x^k e^((a−ib)x) by x^k e^(ax) cos(bx) and x^k e^(ax) sin(bx).

Second-order case

A homogeneous linear differential equation of the second order may be written y'' + ay' + by = 0, and its characteristic polynomial is r^2 + ar + b.

If a and b are real, there are three cases for the solutions, depending on the sign of the discriminant D = a^2 − 4b: two distinct real roots of the characteristic polynomial if D > 0, one double real root if D = 0, and two complex conjugate roots if D < 0. In all three cases, the general solution depends on two arbitrary constants c_1 and c_2.

Finding the solution y(x) satisfying y(0) = d_1 and y'(0) = d_2, one equates the values of the above general solution at 0 and its derivative there to d_1 and d_2, respectively. This results in a linear system of two linear equations in the two unknowns c_1 and c_2. Solving this system gives the solution for a so-called Cauchy problem, in which the values at 0 of the solution of the differential equation and its derivative are specified.
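The Cauchy problem just described can be sketched in code for the distinct-root case (the equation y'' − 3y' + 2y = 0 with y(0) = 0, y'(0) = 1 is a hypothetical example; repeated roots would need the x e^(rx) solutions and are not handled here):

```python
import cmath
import math

def solve_cauchy_2nd_order(a, b, d1, d2):
    """Solution of y'' + a y' + b y = 0 with y(0) = d1, y'(0) = d2,
    assuming the characteristic roots are distinct (a sketch)."""
    r1 = (-a + cmath.sqrt(a*a - 4*b)) / 2
    r2 = (-a - cmath.sqrt(a*a - 4*b)) / 2
    # Equating y(0) and y'(0) to d1, d2 gives the linear system
    #   c1 + c2 = d1,   r1 c1 + r2 c2 = d2.
    c1 = (d2 - r2 * d1) / (r1 - r2)
    c2 = d1 - c1
    return lambda x: (c1 * cmath.exp(r1 * x) + c2 * cmath.exp(r2 * x)).real

# y'' - 3y' + 2y = 0, y(0) = 0, y'(0) = 1 has the solution e^(2x) - e^x.
y = solve_cauchy_2nd_order(-3.0, 2.0, 0.0, 1.0)
assert abs(y(0.5) - (math.exp(1.0) - math.exp(0.5))) < 1e-9
```

Complex arithmetic is used throughout so that the same formula covers the case D < 0, where the roots are complex conjugates and the real part of the combination is the real solution.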

Non-homogeneous equation with constant coefficients

A non-homogeneous equation of order n with constant coefficients may be written y^(n)(x) + a_1 y^(n−1)(x) + ⋯ + a_(n−1) y'(x) + a_n y(x) = f(x), where a_1, …, a_n are real or complex numbers, f is a given function of x, and y is the unknown function (for the sake of simplicity, "(x)" will be omitted in the following).

There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form x^k e^(ax), x^k cos(ax), and x^k sin(ax), where k is a nonnegative integer and a a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more generally, the annihilator method applies when f satisfies a homogeneous linear differential equation, typically when f is a holonomic function.
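For instance, the method of undetermined coefficients can be illustrated on the hypothetical equation y'' − y = e^(2x): since the right-hand side has the form e^(ax), one tries y_p = A e^(2x), and substitution gives 4A − A = 1, so A = 1/3. A minimal numeric check:

```python
import math

A = 1.0 / 3.0   # undetermined coefficient found from 4A - A = 1

def yp(x):      # trial particular solution A e^(2x)
    return A * math.exp(2 * x)

def yp2(x):     # its second derivative, 4 A e^(2x)
    return 4 * A * math.exp(2 * x)

# Verify y_p'' - y_p = e^(2x) at several sample points.
for x in (0.0, 0.4, 1.3):
    assert abs(yp2(x) - yp(x) - math.exp(2 * x)) < 1e-9
```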

The most general method is the variation of constants, which is presented here.

The general solution of the associated homogeneous equation y^(n) + a_1 y^(n−1) + ⋯ + a_(n−1) y' + a_n y = 0 is y = u_1 y_1 + ⋯ + u_n y_n, where y_1, …, y_n is a basis of the vector space of the solutions and u_1, …, u_n are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering u_1, …, u_n as constants, they can be considered as unknown functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints

0 = u'_1 y_1 + u'_2 y_2 + ⋯ + u'_n y_n
0 = u'_1 y'_1 + u'_2 y'_2 + ⋯ + u'_n y'_n
⋮
0 = u'_1 y_1^(n−2) + u'_2 y_2^(n−2) + ⋯ + u'_n y_n^(n−2),

which imply (by the product rule and induction) y^(i) = u_1 y_1^(i) + ⋯ + u_n y_n^(i) for i = 1, …, n − 1, and y^(n) = u_1 y_1^(n) + ⋯ + u_n y_n^(n) + u'_1 y_1^(n−1) + u'_2 y_2^(n−1) + ⋯ + u'_n y_n^(n−1).

Replacing in the original equation y and its derivatives by these expressions, and using the fact that y_1, …, y_n are solutions of the original homogeneous equation, one gets f = u'_1 y_1^(n−1) + ⋯ + u'_n y_n^(n−1).

This equation and the above ones with 0 as left-hand side form a system of n linear equations in u'_1, …, u'_n whose coefficients are known functions (f, the y_i, and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives u_1, …, u_n, and then y = u_1 y_1 + ⋯ + u_n y_n.

As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation.
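As a worked instance of variation of constants (the equation y'' + y = sec x is a standard textbook case, chosen here for illustration): with y_1 = cos x, y_2 = sin x and Wronskian W = 1, the system gives u'_1 = −sin x sec x and u'_2 = cos x sec x, hence u_1 = ln|cos x| and u_2 = x, and the particular solution y_p = cos x · ln|cos x| + x sin x. A numeric check, with the second derivative of y_p differentiated by hand:

```python
import math

def yp(x):    # particular solution from variation of constants
    return math.cos(x) * math.log(math.cos(x)) + x * math.sin(x)

def yp2(x):   # its second derivative, differentiated by hand
    return (-math.cos(x) * math.log(math.cos(x))
            + math.sin(x)**2 / math.cos(x)
            + math.cos(x) - x * math.sin(x))

# Verify y_p'' + y_p = sec x on (0, pi/2), where cos x > 0.
for x in (0.1, 0.5, 1.0):
    assert abs(yp2(x) + yp(x) - 1.0 / math.cos(x)) < 1e-9
```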

First-order equation with variable coefficients

The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of y'(x), is: y'(x) = f(x) y(x) + g(x).

If the equation is homogeneous, i.e. g(x) = 0, one may rewrite and integrate: y'/y = f, log y = k + F, where k is an arbitrary constant of integration and F = ∫ f dx is any antiderivative of f. Thus, the general solution of the homogeneous equation is y = c e^F, where c = e^k is an arbitrary constant.

For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the reciprocal e^(−F) of a solution of the homogeneous equation.[2] This gives y' e^(−F) − y f e^(−F) = g e^(−F). As F' = f, the product rule allows rewriting the equation as d/dx (y e^(−F)) = g e^(−F). Thus, the general solution is y = c e^F + e^F ∫ g e^(−F) dx, where c is a constant of integration, and F is any antiderivative of f (changing the antiderivative amounts to changing the constant of integration).
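The closed formula above can be turned into a small numerical solver by accumulating both F and the integral ∫ g e^(−F) with the trapezoidal rule (a sketch under the assumption F(x0) = 0, not production code; the function name is mine):

```python
import math

def solve_linear_first_order(f, g, x0, y0, x1, n=10000):
    """Evaluate at x1 the solution of y' = f(x) y + g(x), y(x0) = y0,
    using y = e^F (y0 + integral of g e^(-F)), with F(x0) = 0."""
    h = (x1 - x0) / n
    F = 0.0            # running antiderivative of f
    I = 0.0            # running integral of g e^(-F)
    t = x0
    g_prev = g(x0)     # g(t) e^(-F(t)) at t = x0, where F = 0
    for k in range(1, n + 1):
        t_next = x0 + k * h
        F += 0.5 * h * (f(t) + f(t_next))
        g_next = g(t_next) * math.exp(-F)
        I += 0.5 * h * (g_prev + g_next)
        t, g_prev = t_next, g_next
    return math.exp(F) * (y0 + I)

# Check against the closed form: y' = -y + 1, y(0) = 0 has solution 1 - e^(-x).
val = solve_linear_first_order(lambda x: -1.0, lambda x: 1.0, 0.0, 0.0, 2.0)
assert abs(val - (1.0 - math.exp(-2.0))) < 1e-6
```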

Example

Solving the equation y'(x) + y(x)/x = 3x. The associated homogeneous equation y'(x) + y(x)/x = 0 gives y'/y = −1/x, that is, y = c/x.

Dividing the original equation by one of these solutions gives xy' + y = 3x^2. That is, (xy)' = 3x^2, xy = x^3 + c, and y(x) = x^2 + c/x. For the initial condition y(1) = α, one gets the particular solution y(x) = x^2 + (α − 1)/x.
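The worked example can be checked by substituting the solution back into the equation (a minimal sketch; the sample value of α is arbitrary):

```python
# Substitute y(x) = x^2 + (alpha - 1)/x back into y' + y/x = 3x and
# check the initial condition y(1) = alpha.
alpha = 2.5                                  # arbitrary sample value

def y(x):
    return x**2 + (alpha - 1.0) / x

def y1(x):                                   # derivative of y
    return 2.0 * x - (alpha - 1.0) / x**2

assert abs(y(1.0) - alpha) < 1e-12
for x in (0.5, 1.0, 2.0):
    assert abs(y1(x) + y(x) / x - 3.0 * x) < 1e-12
```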

System of linear differential equations

A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general, one restricts the study to systems such that the number of unknown functions equals the number of equations.

An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if y', y'', …, y^(k) appear in an equation, one may replace them by new unknown functions y_1, …, y_k that must satisfy the equations y_1 = y' and y_i = y'_(i−1) for i = 2, …, k.
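For example (a sketch; the reduction of y'' + y = 0 is a hypothetical illustration), a second-order equation becomes a 2×2 first-order system governed by a companion matrix:

```python
import numpy as np

def companion(a0, a1):
    # y'' + a1 y' + a0 y = 0 with z1 = y, z2 = y' becomes z' = A z.
    return np.array([[0.0, 1.0], [-a0, -a1]])

A = companion(1.0, 0.0)            # encodes y'' + y = 0
z = np.array([1.0, 0.0])           # y(0) = 1, y'(0) = 0
h = 1e-4
for _ in range(10000):             # crude explicit Euler up to x = 1
    z = z + h * (A @ z)

# The exact solution is y = cos x, so z[0] should approximate cos(1).
assert abs(z[0] - np.cos(1.0)) < 1e-3
```

The explicit Euler loop is only a sanity check that the system encodes the original equation; any standard ODE integrator would serve the same purpose.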

A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If this is not the case, it is a differential-algebraic system, and this is a different theory. Therefore, the systems that are considered here have the form

y'_1(x) = b_1(x) + a_(1,1)(x) y_1 + ⋯ + a_(1,n)(x) y_n
⋮
y'_n(x) = b_n(x) + a_(n,1)(x) y_1 + ⋯ + a_(n,n)(x) y_n,

where the b_i and the a_(i,j) are functions of x. In matrix notation, this system may be written (omitting "(x)") y' = Ay + b.

The solving method is similar to that of a single first order linear differential equation, but with complications stemming from noncommutativity of matrix multiplication.

Let u' = Au be the homogeneous equation associated to the above matrix equation. Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions U(x), whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A commutes with its antiderivative B = ∫ A dx, then one may choose U equal to the exponential of B. In fact, in these cases, one has d/dx exp(B) = A exp(B). In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as the Magnus expansion.

Knowing the matrix U, the general solution of the non-homogeneous equation is y(x) = U(x) y_0 + U(x) ∫ U^(−1)(x) b(x) dx, where the column matrix y_0 is an arbitrary constant of integration.

If initial conditions are given as y(x_0) = y_0, the solution that satisfies these initial conditions is y(x) = U(x) U^(−1)(x_0) y_0 + U(x) ∫_(x_0)^x U^(−1)(t) b(t) dt.
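For a constant matrix A, one may take U(x) = exp(Ax); a sketch using a truncated Taylor series for the matrix exponential (adequate for small ‖Ax‖, not a robust general-purpose method):

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential by truncated Taylor series (a sketch)."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k          # T = M^k / k!
        E = E + T
    return E

# y' = A y with A = [[0, 1], [-1, 0]] and y(0) = (1, 0) has the
# exact solution y(x) = (cos x, -sin x).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = 0.8
y = expm_taylor(A * x) @ np.array([1.0, 0.0])
assert np.allclose(y, [np.cos(x), -np.sin(x)])
```

Here A is constant, so it trivially commutes with its antiderivative Ax, and the fundamental matrix is exactly exp(Ax).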

Higher order with variable coefficients

A linear ordinary differential equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory, which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.

The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory.

Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers.

Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm.

Cauchy–Euler equation

Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form x^n y^(n)(x) + a_(n−1) x^(n−1) y^(n−1)(x) + ⋯ + a_0 y(x) = 0, where a_0, …, a_(n−1) are constant coefficients.
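For such equations, the trial solution y = x^r reduces the ODE to a polynomial equation in r, since x^k y^(k) contributes r(r−1)⋯(r−k+1) x^r. For the hypothetical example x^2 y'' + x y' − y = 0 this gives r(r−1) + r − 1 = r^2 − 1 = 0, so r = ±1 and the solutions are x and 1/x; a quick check:

```python
# Check that y = x and y = 1/x satisfy x^2 y'' + x y' - y = 0
# (derivatives written in closed form).
def residual(y, y1, y2, x):
    return x * x * y2 + x * y1 - y

x = 1.7
candidates = [
    (x, 1.0, 0.0),                       # y = x: y' = 1, y'' = 0
    (1.0 / x, -1.0 / x**2, 2.0 / x**3),  # y = 1/x
]
for y, y1, y2 in candidates:
    assert abs(residual(y, y1, y2, x)) < 1e-12
```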

Holonomic functions

A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.

Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions.

Holonomic functions have several closure properties; in particular, sums, products, derivatives and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the inputs.[3]

The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.[3]

A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is, for computing the recurrence relation from the differential equation, and vice versa.[3]
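For instance (a sketch), the differential equation y' − y = 0 with y(0) = 1, which defines exp, translates into the recurrence (n + 1) c_(n+1) = c_n for the Taylor coefficients c_n:

```python
import math

# Matching coefficients of x^n in y' = y, with y = sum of c_n x^n,
# gives (n + 1) c_(n+1) = c_n; with c_0 = 1 this yields c_n = 1/n!.
c = [1.0]
for n in range(20):
    c.append(c[n] / (n + 1))

assert all(abs(c[n] - 1.0 / math.factorial(n)) < 1e-15 for n in range(21))

# The truncated series then evaluates exp near 0 to high accuracy.
x = 0.5
series = sum(cn * x**n for n, cn in enumerate(c))
assert abs(series - math.exp(x)) < 1e-12
```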

It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as derivative, indefinite and definite integral, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with a certified bound of the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proof of identities, etc.[4]

References


  1. Gershenfeld 1999, p. 9.
  2. Motivation: In analogy to the completing-the-square technique, we write the equation as y' − fy = g, and try to modify the left side so it becomes a derivative. Specifically, we seek an "integrating factor" h = h(x) such that multiplying by it makes the left side equal to the derivative of hy, namely hy' + h'y. This means h' = −hf, so that h = e^(−∫f dx) = e^(−F), as in the text.
  3. Zeilberger, Doron (1990). "A holonomic systems approach to special functions identities". Journal of Computational and Applied Mathematics 32 (3): 321–368.
  4. Benoit, A.; Chyzak, F.; Darrasse, A.; Gerhold, S.; Mezzarobba, M.; Salvy, B. (2010). "The dynamic dictionary of mathematical functions (DDMF)". In International Congress on Mathematical Software (pp. 35–41). Springer, Berlin, Heidelberg.