Inverse function theorem


In mathematics, specifically differential calculus, the inverse function theorem gives a sufficient condition for a function to be invertible in a neighborhood of a point in its domain: namely, that its derivative is continuous and non-zero at the point. The theorem also gives a formula for the derivative of the inverse function. In multivariable calculus, the theorem can be generalized to any continuously differentiable, vector-valued function whose Jacobian determinant is nonzero at a point in its domain, giving a formula for the Jacobian matrix of the inverse. There are also versions of the inverse function theorem for holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth.

The theorem was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem.

Statements

For functions of a single variable, the theorem states that if $f$ is a continuously differentiable function with nonzero derivative at the point $a$, then $f$ is injective (or bijective onto the image) in a neighborhood of $a$, the inverse is continuously differentiable near $b = f(a)$, and the derivative of the inverse function at $b$ is the reciprocal of the derivative of $f$ at $a$:

$$(f^{-1})'(b) = \frac{1}{f'(a)} = \frac{1}{f'(f^{-1}(b))}.$$
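The reciprocal rule is easy to check numerically. The sketch below uses a hypothetical example not from the article, $f(x) = x^3 + x$ with $a = 1$, so $f'(a) = 4$: it inverts $f$ by bisection (valid since $f$ is strictly increasing) and differentiates the inverse by a central finite difference.

```python
import math

def f(x):
    return x**3 + x          # strictly increasing: f'(x) = 3x^2 + 1 > 0

def f_inv(y, lo=-10.0, hi=10.0):
    # invert f by bisection, valid because f is strictly increasing
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a = 1.0
b = f(a)                     # b = 2
fprime_a = 3 * a**2 + 1      # f'(a) = 4

# central finite-difference derivative of the inverse at b
h = 1e-6
inv_deriv = (f_inv(b + h) - f_inv(b - h)) / (2 * h)

print(inv_deriv, 1 / fprime_a)   # both close to 0.25
```

The finite-difference quotient agrees with $1/f'(a) = 0.25$ to several digits.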

It can happen that a function $f$ is injective near a point $a$ while $f'(a) = 0$. An example is $f(x) = (x - a)^3$. In fact, for such a function, the inverse cannot be differentiable at $b = f(a)$, since if $f^{-1}$ were differentiable at $b$, then, by the chain rule, $1 = (f^{-1} \circ f)'(a) = (f^{-1})'(b) f'(a)$, which would imply $f'(a) \neq 0$. (The situation is different for holomorphic functions; see § Holomorphic inverse function theorem below.)

For functions of more than one variable, the theorem states that if $f$ is a continuously differentiable function from an open subset $A$ of $\mathbb{R}^n$ into $\mathbb{R}^n$, and the derivative $f'(a)$ is invertible at a point $a$ (that is, the determinant of the Jacobian matrix of $f$ at $a$ is non-zero), then there exist neighborhoods $U$ of $a$ in $A$ and $V$ of $b = f(a)$ such that $f(U) \subseteq V$ and $f : U \to V$ is bijective.[1] Writing $f = (f_1, \ldots, f_n)$, this means that the system of $n$ equations $y_i = f_i(x_1, \ldots, x_n)$ has a unique solution for $x_1, \ldots, x_n$ in terms of $y_1, \ldots, y_n$ when $x \in U$, $y \in V$. Note that the theorem does not say $f$ is bijective onto its image wherever $f'$ is invertible, but that it is locally bijective wherever $f'$ is invertible.

Moreover, the theorem says that the inverse function $f^{-1} : V \to U$ is continuously differentiable, and its derivative at $b = f(a)$ is the inverse map of $f'(a)$; i.e.,

$$(f^{-1})'(b) = f'(a)^{-1}.$$

In other words, if $J_{f^{-1}}(b)$ and $J_f(a)$ are the Jacobian matrices representing $(f^{-1})'(b)$ and $f'(a)$, this means:

$$J_{f^{-1}}(b) = J_f(a)^{-1}.$$

The hard part of the theorem is the existence and differentiability of $f^{-1}$. Assuming this, the inverse derivative formula follows from the chain rule applied to $f^{-1} \circ f = \mathrm{id}$: indeed, $I = \mathrm{id}'(a) = (f^{-1} \circ f)'(a) = (f^{-1})'(b) \circ f'(a)$. Since taking the inverse of a linear map is infinitely differentiable, the formula for the derivative of the inverse shows that if $f$ is continuously $k$ times differentiable, with invertible derivative at the point $a$, then the inverse is also continuously $k$ times differentiable. Here $k$ is a positive integer or $\infty$.

There are two variants of the inverse function theorem.[1] Given a continuously differentiable map $f : U \to \mathbb{R}^m$, the first is

  • The derivative $f'(a)$ is surjective (i.e., the Jacobian matrix representing it has rank $m$) if and only if there exists a continuously differentiable function $g$ on a neighborhood $V$ of $b = f(a)$ such that $f \circ g = \mathrm{id}$ near $b$,

and the second is

  • The derivative $f'(a)$ is injective if and only if there exists a continuously differentiable function $g$ on a neighborhood $V$ of $b = f(a)$ such that $g \circ f = \mathrm{id}$ near $a$.

In the first case (when $f'(a)$ is surjective), the point $b = f(a)$ is called a regular value. Since $m = \dim \ker(f'(a)) + \dim \operatorname{im}(f'(a))$, the first case is equivalent to saying that $b = f(a)$ is not in the image of critical points $a$ (a critical point being a point $a$ such that the kernel of $f'(a)$ is nonzero). The statement in the first case is a special case of the submersion theorem.

These variants are restatements of the inverse function theorem. Indeed, in the first case, when $f'(a)$ is surjective, we can find an (injective) linear map $T$ such that $f'(a) \circ T = I$. Define $h(x) = a + Tx$ so that we have:

$$(f \circ h)'(0) = f'(a) \circ T = I.$$

Thus, by the inverse function theorem, $f \circ h$ has an inverse near $0$; i.e., $f \circ h \circ (f \circ h)^{-1} = \mathrm{id}$ near $b$. The second case ($f'(a)$ injective) is seen in a similar way.

Example

Consider the vector-valued function $F : \mathbb{R}^2 \to \mathbb{R}^2$ defined by:

$$F(x, y) = \begin{bmatrix} e^x \cos y \\ e^x \sin y \end{bmatrix}.$$

The Jacobian matrix of $F$ at $(x, y)$ is:

$$J_F(x, y) = \begin{bmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{bmatrix}$$

with determinant:

$$\det J_F(x, y) = e^{2x} \cos^2 y + e^{2x} \sin^2 y = e^{2x}.$$

The determinant $e^{2x}$ is nonzero everywhere. Thus the theorem guarantees that, for every point $p$ in $\mathbb{R}^2$, there exists a neighborhood of $p$ over which $F$ is invertible. This does not mean $F$ is invertible over its entire domain: in this case $F$ is not even injective, since it is periodic: $F(x, y) = F(x, y + 2\pi)$.
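The guaranteed local inverse can be checked numerically. The sketch below assumes the explicit local inverse $F^{-1}(u, v) = \left(\tfrac{1}{2}\ln(u^2 + v^2), \operatorname{atan2}(v, u)\right)$, valid for $-\pi < y < \pi$, and verifies that the finite-difference Jacobian of the inverse at $b = F(a)$ is the matrix inverse of $J_F(a)$:

```python
import math

def F(x, y):
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

def F_inv(u, v):
    # a local inverse of F, valid for -pi < y < pi
    return (0.5 * math.log(u * u + v * v), math.atan2(v, u))

a = (0.3, 0.7)
b = F(*a)

# Jacobian of F at a, from the explicit partial derivatives
ex, cy, sy = math.exp(a[0]), math.cos(a[1]), math.sin(a[1])
JF = [[ex * cy, -ex * sy],
      [ex * sy,  ex * cy]]

# Jacobian of the inverse at b, by central finite differences
h = 1e-6
col_u = [(F_inv(b[0] + h, b[1])[i] - F_inv(b[0] - h, b[1])[i]) / (2 * h) for i in range(2)]
col_v = [(F_inv(b[0], b[1] + h)[i] - F_inv(b[0], b[1] - h)[i]) / (2 * h) for i in range(2)]
J_inv = [[col_u[0], col_v[0]],
         [col_u[1], col_v[1]]]

# J_F(a) times J_{F^{-1}}(b) should be close to the identity matrix
prod = [[sum(JF[i][k] * J_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```

The product is the identity up to finite-difference error, in line with $J_{F^{-1}}(b) = J_F(a)^{-1}$.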

Counter-example

[Figure: the graph of $f(x) = x + 2x^2 \sin(1/x)$ is bounded inside a quadratic envelope near the line $y = x$, so $f'(0) = 1$. Nevertheless, it has local max/min points accumulating at $x = 0$, so it is not one-to-one on any surrounding interval.]

If one drops the assumption that the derivative is continuous, the function need no longer be invertible. For example, $f(x) = x + 2x^2 \sin(1/x)$ for $x \neq 0$, with $f(0) = 0$, has the discontinuous derivative $f'(x) = 1 - 2\cos(1/x) + 4x \sin(1/x)$ for $x \neq 0$ and $f'(0) = 1$, and $f'$ vanishes arbitrarily close to $x = 0$. These critical points are local max/min points of $f$, so $f$ is not one-to-one (and not invertible) on any interval containing $x = 0$. Intuitively, the slope $f'(0) = 1$ does not propagate to nearby points, where the slopes are governed by a weak but rapid oscillation.
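The sign changes of $f'$ near $0$ can be confirmed numerically: at the sample points $x = 1/(2\pi n)$ we have $\cos(1/x) = 1$, so $f'(x) \approx -1$, while at $x = 1/((2n + 1)\pi)$ we have $\cos(1/x) = -1$, so $f'(x) \approx 3$. A short check (an illustration, not part of the original text):

```python
import math

def fprime(x):
    # derivative of f(x) = x + 2 x^2 sin(1/x) away from 0
    return 1 - 2 * math.cos(1 / x) + 4 * x * math.sin(1 / x)

signs = []
for n in range(100, 103):
    x_neg = 1 / (2 * math.pi * n)        # cos(1/x) = 1 here, so f'(x) is near -1
    x_pos = 1 / ((2 * n + 1) * math.pi)  # cos(1/x) = -1 here, so f'(x) is near 3
    signs.append((fprime(x_neg), fprime(x_pos)))

print(signs)
```

Both sample families approach $0$, so $f'$ takes both signs on every interval around $0$ and $f$ cannot be monotone there.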

Methods of proof

As an important result, the inverse function theorem has been given numerous proofs. The proof most commonly seen in textbooks relies on the contraction mapping principle, also known as the Banach fixed-point theorem (which can also be used as the key step in the proof of existence and uniqueness of solutions to ordinary differential equations).[2][3]

Since the fixed point theorem applies in infinite-dimensional (Banach space) settings, this proof generalizes immediately to the infinite-dimensional version of the inverse function theorem[4] (see Generalizations below).

An alternate proof in finite dimensions hinges on the extreme value theorem for functions on a compact set.[5] This approach has the advantage that the proof generalizes to a situation where there is no Cauchy completeness (see § Over a real closed field below).

Yet another proof uses Newton's method, which has the advantage of providing an effective version of the theorem: bounds on the derivative of the function imply an estimate of the size of the neighborhood on which the function is invertible.[6]
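The Newton's-method viewpoint can be sketched concretely. Assuming the hypothetical function $f(x) = x^3 + x$ (chosen because $f' > 0$ everywhere), the iteration $x \leftarrow x - (f(x) - y)/f'(x)$ computes $f^{-1}(y)$:

```python
def f(x):
    return x**3 + x

def fprime(x):
    return 3 * x**2 + 1

def f_inv_newton(y, x0=0.0, steps=30):
    # solve f(x) = y by Newton's method: x <- x - (f(x) - y) / f'(x)
    x = x0
    for _ in range(steps):
        x = x - (f(x) - y) / fprime(x)
    return x

x = f_inv_newton(2.0)
print(x, f(x))   # x close to 1, f(x) close to 2
```

Quantitative bounds on $f'$ in such a scheme translate into an explicit radius on which the inverse exists, which is the "effective" content mentioned above.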

Proof for single-variable functions

We want to prove the following: Let $D \subseteq \mathbb{R}$ be an open set with $x_0 \in D$, let $f : D \to \mathbb{R}$ be a continuously differentiable function, and suppose that $f'(x_0) \neq 0$. Then there exists an open interval $I$ with $x_0 \in I$ such that $f$ maps $I$ bijectively onto the open interval $J = f(I)$, and such that the inverse function $f^{-1} : J \to I$ is continuously differentiable, and for any $y \in J$, if $x \in I$ is such that $f(x) = y$, then $(f^{-1})'(y) = 1/f'(x)$.

We may without loss of generality assume that $f'(x_0) > 0$. Since $D$ is an open set and $f'$ is continuous at $x_0$, there exists $r > 0$ such that $(x_0 - r, x_0 + r) \subseteq D$ and

$$|f'(x) - f'(x_0)| < \frac{f'(x_0)}{2} \quad \text{for all } |x - x_0| < r.$$

In particular,

$$f'(x) > \frac{f'(x_0)}{2} > 0 \quad \text{for all } |x - x_0| < r.$$

This shows that $f$ is strictly increasing on $(x_0 - r, x_0 + r)$. Let $\delta > 0$ be such that $\delta < r$. Then $[x_0 - \delta, x_0 + \delta] \subset (x_0 - r, x_0 + r)$. By the intermediate value theorem, we find that $f$ maps the interval $[x_0 - \delta, x_0 + \delta]$ bijectively onto $[f(x_0 - \delta), f(x_0 + \delta)]$. Denote $I = (x_0 - \delta, x_0 + \delta)$ and $J = (f(x_0 - \delta), f(x_0 + \delta))$. Then $f : I \to J$ is a bijection and the inverse $f^{-1} : J \to I$ exists. The fact that $f^{-1} : J \to I$ is differentiable follows from the differentiability of $f$. In particular, the result follows from the fact that if $f : I \to \mathbb{R}$ is a strictly monotonic and continuous function that is differentiable at $x_0 \in I$ with $f'(x_0) \neq 0$, then $f^{-1} : f(I) \to \mathbb{R}$ is differentiable with $(f^{-1})'(y_0) = 1/f'(x_0)$, where $y_0 = f(x_0)$ (a standard result in analysis). This completes the proof.

A proof using successive approximation

To prove existence, it can be assumed after an affine transformation that $f(0) = 0$ and $f'(0) = I$, so that $a = b = 0$.

By the mean value theorem for vector-valued functions, for a differentiable function $u : [0, 1] \to \mathbb{R}^m$, $\|u(1) - u(0)\| \leq \sup_{0 \leq t \leq 1} \|u'(t)\|$. Setting $u(t) = f(x' + t(x - x')) - x' - t(x - x')$, it follows that

$$\|f(x) - f(x') - x + x'\| \leq \|x - x'\| \sup_{0 \leq t \leq 1} \|f'(x' + t(x - x')) - I\|.$$

Now choose $\delta > 0$ so that $\|f'(x) - I\| < \tfrac{1}{2}$ for $\|x\| < \delta$. Suppose that $\|y\| < \delta/2$ and define $x_n$ inductively by $x_0 = 0$ and $x_{n+1} = x_n + y - f(x_n)$. The assumptions show that if $\|x\|, \|x'\| < \delta$ then

$$\|f(x) - f(x') - x + x'\| \leq \|x - x'\|/2.$$

In particular $f(x) = f(x')$ implies $x = x'$. In the inductive scheme $\|x_n\| < \delta$ and $\|x_{n+1} - x_n\| < \delta/2^n$. Thus $(x_n)$ is a Cauchy sequence tending to some $x$. By construction $f(x) = y$ as required.

To check that $g = f^{-1}$ is $C^1$, write $g(y + k) = x + h$, so that $f(x + h) = f(x) + k$. By the inequalities above, $\|h - k\| < \|h\|/2$, so that $\|h\|/2 < \|k\| < 2\|h\|$. On the other hand, if $A = f'(x)$, then $\|A - I\| < 1/2$. Using the geometric series for $B = I - A$, it follows that $\|A^{-1}\| < 2$. But then

$$\frac{\|g(y + k) - g(y) - f'(g(y))^{-1} k\|}{\|k\|} = \frac{\|h - f'(x)^{-1}[f(x + h) - f(x)]\|}{\|k\|} \leq 4 \frac{\|f(x + h) - f(x) - f'(x) h\|}{\|h\|}$$

tends to $0$ as $k$ and $h$ tend to $0$, proving that $g$ is $C^1$ with $g'(y) = f'(g(y))^{-1}$.

The proof above is presented for a finite-dimensional space, but applies equally well to Banach spaces. If an invertible function $f$ is $C^k$ with $k > 1$, then so too is its inverse. This follows by induction using the fact that the map $F(A) = A^{-1}$ on operators is $C^k$ for any $k$ (in the finite-dimensional case this is an elementary fact because the inverse of a matrix is given as the adjugate matrix divided by its determinant).[1][7] The method of proof here can be found in the books of Henri Cartan, Jean Dieudonné, Serge Lang, Roger Godement and Lars Hörmander.
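The successive-approximation scheme $x_{n+1} = x_n + y - f(x_n)$ is easy to run in one dimension. A minimal sketch, assuming the hypothetical map $f(x) = x + x^2/4$, which satisfies $f(0) = 0$, $f'(0) = 1$ and $|f'(x) - 1| = |x|/2 < 1/2$ for $|x| < 1$:

```python
def f(x):
    # f(0) = 0, f'(0) = 1, and |f'(x) - 1| = |x|/2 < 1/2 for |x| < 1
    return x + x**2 / 4

def solve(y, steps=60):
    # successive approximation from the proof: x_{n+1} = x_n + y - f(x_n)
    x = 0.0
    for _ in range(steps):
        x = x + y - f(x)
    return x

y = 0.1
x = solve(y)
print(x, f(x))   # f(x) is close to 0.1
```

Each step contracts the error by roughly $\sup |f'(x) - 1|$, so the iterates converge geometrically to $f^{-1}(y)$, exactly as in the Cauchy-sequence argument above.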

A proof using the contraction mapping principle

Here is a proof based on the contraction mapping theorem. Specifically, following T. Tao,[8] it uses the following consequence of the contraction mapping theorem.

Template:Math theorem

Basically, the lemma says that a small perturbation of the identity map by a contraction map is injective and preserves a ball in some sense. Assuming the lemma for a moment, we prove the theorem first. As in the above proof, it is enough to prove the special case when $a = 0$, $b = f(a) = 0$ and $f'(0) = I$. Let $g = f - I$. The mean value inequality applied to $t \mapsto g(x + t(y - x))$ says:

$$|g(y) - g(x)| \leq |y - x| \sup_{0 < t < 1} \|g'(x + t(y - x))\|.$$

Since $g'(0) = I - I = 0$ and $g'$ is continuous, we can find an $r > 0$ such that

$$|g(y) - g(x)| \leq 2^{-1} |y - x|$$

for all $x, y$ in $B(0, r)$. Then the lemma above says that $f = g + I$ is injective on $B(0, r)$ and $B(0, r/2) \subseteq f(B(0, r))$. Then

$$f : U = B(0, r) \cap f^{-1}(B(0, r/2)) \to V = B(0, r/2)$$

is bijective and thus has an inverse. Next, we show that the inverse $f^{-1}$ is continuously differentiable (this part of the argument is the same as in the previous proof). This time, let $g = f^{-1}$ denote the inverse of $f$ and let $A = f'(x)$. For $x = g(y)$, we write $g(y + k) = x + h$, or $y + k = f(x + h)$. Now, by the earlier estimate, we have

$$|h - k| = |f(x + h) - f(x) - h| \leq |h|/2$$

and so $|h|/2 \leq |k|$. Writing $\|\cdot\|$ for the operator norm,

$$|g(y + k) - g(y) - A^{-1} k| = |h - A^{-1}(f(x + h) - f(x))| \leq \|A^{-1}\| \, |Ah - f(x + h) + f(x)|.$$

As $k \to 0$, we have $h \to 0$ and $|h|/|k|$ is bounded. Hence, $g$ is differentiable at $y$ with the derivative $g'(y) = f'(g(y))^{-1}$. Also, $g'$ is the composition $\iota \circ f' \circ g$ where $\iota : T \mapsto T^{-1}$; so $g'$ is continuous.

It remains to show the lemma. First, we have:

$$|x - y| - |f(x) - f(y)| \leq |g(x) - g(y)| \leq c|x - y|,$$

which is to say

$$(1 - c)|x - y| \leq |f(x) - f(y)|.$$

This proves the first part. Next, we show $f(B(0, r)) \supseteq B(0, (1 - c)r)$. The idea is to note that this is equivalent to the following: given a point $y$ in $B(0, (1 - c)r)$, find a fixed point of the map

$$F : \overline{B}(0, r') \to \overline{B}(0, r'), \quad x \mapsto y - g(x),$$

where $0 < r' < r$ is such that $|y| \leq (1 - c)r'$ and the bar means a closed ball. To find a fixed point, we use the contraction mapping theorem; checking that $F$ is a well-defined strict-contraction mapping is straightforward. Finally, we have $f(B(0, r)) \subseteq B(0, (1 + c)r)$, since

$$|f(x)| = |x + g(x) - g(0)| \leq (1 + c)|x|.$$

As might be clear, this proof is not substantially different from the previous one, as the proof of the contraction mapping theorem is by successive approximation.

Applications

Implicit function theorem

The inverse function theorem can be used to solve a system of equations

$$f_1(x) = y_1, \quad \ldots, \quad f_n(x) = y_n,$$

i.e., expressing $x_1, \ldots, x_n$ as functions of $y_1, \ldots, y_n$, provided the Jacobian matrix is invertible. The implicit function theorem allows one to solve a more general system of equations:

$$f_1(x, y) = 0, \quad \ldots, \quad f_n(x, y) = 0$$

for $y$ in terms of $x$. Though more general, the theorem is actually a consequence of the inverse function theorem. First, the precise statement of the implicit function theorem is as follows:[9]

  • Given a map $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m$, if $f(a, b) = 0$, $f$ is continuously differentiable in a neighborhood of $(a, b)$ and the derivative of $y \mapsto f(a, y)$ at $b$ is invertible, then there exists a differentiable map $g : U \to V$ for some neighborhoods $U, V$ of $a, b$ such that $f(x, g(x)) = 0$. Moreover, if $f(x, y) = 0$ for $x \in U$, $y \in V$, then $y = g(x)$; i.e., $g(x)$ is the unique solution.

To see this, consider the map $F(x, y) = (x, f(x, y))$. By the inverse function theorem, $F : U \times V \to W$ has an inverse $G$ for some neighborhoods $U, V, W$. Writing $G = (G_1, G_2)$, we then have:

$$(x, y) = F(G_1(x, y), G_2(x, y)) = (G_1(x, y), f(G_1(x, y), G_2(x, y))),$$

implying $x = G_1(x, y)$ and $y = f(x, G_2(x, y))$. Thus $g(x) = G_2(x, 0)$ has the required property.
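The implicit-differentiation formula that comes out of this argument, $g'(x) = -\left(\partial f/\partial y\right)^{-1} \partial f/\partial x$, can be spot-checked in one dimension. A sketch assuming the hypothetical constraint $f(x, y) = x^2 + y^2 - 1$ near the point $(a, b) = (0.6, 0.8)$, where $\partial f/\partial y = 2b \neq 0$:

```python
import math

def f(x, y):
    return x * x + y * y - 1.0

a, b = 0.6, 0.8      # f(a, b) = 0 and df/dy = 2b != 0 here

def g(x):
    # the local solution of f(x, y) = 0 with y near b
    return math.sqrt(1.0 - x * x)

# implicit-differentiation formula: g'(a) = -(df/dx) / (df/dy) = -a/b
formula = -a / b

h = 1e-6
numeric = (g(a + h) - g(a - h)) / (2 * h)
print(formula, numeric)   # both close to -0.75
```

The finite-difference slope of the solved branch matches the formula value $-a/b = -0.75$.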

Giving a manifold structure

In differential geometry, the inverse function theorem is used to show that the pre-image of a regular value under a smooth map is a manifold.[10] Indeed, let $f : U \to \mathbb{R}^r$ be such a smooth map from an open subset $U$ of $\mathbb{R}^n$ (since the result is local, there is no loss of generality in considering such a map). Fix a point $a$ in $f^{-1}(b)$ and then, by permuting the coordinates on $\mathbb{R}^n$, assume the matrix $\left[\frac{\partial f_i}{\partial x_j}(a)\right]_{1 \leq i, j \leq r}$ has rank $r$. Then the map $F : U \to \mathbb{R}^r \times \mathbb{R}^{n-r} = \mathbb{R}^n$, $x \mapsto (f(x), x_{r+1}, \ldots, x_n)$, is such that $F'(a)$ has rank $n$. Hence, by the inverse function theorem, we find a smooth inverse $G$ of $F$ defined in a neighborhood $V \times W$ of $(b, a_{r+1}, \ldots, a_n)$. We then have

$$x = (F \circ G)(x) = (f(G(x)), G_{r+1}(x), \ldots, G_n(x)),$$

which implies

$$(f \circ G)(x_1, \ldots, x_n) = (x_1, \ldots, x_r).$$

That is, after the change of coordinates by $G$, $f$ is a coordinate projection (this fact is known as the submersion theorem). Moreover, since $G : V \times W \to U' = G(V \times W)$ is bijective, the map

$$g = G(b, \cdot) : W \to f^{-1}(b) \cap U', \quad (x_{r+1}, \ldots, x_n) \mapsto G(b, x_{r+1}, \ldots, x_n)$$

is bijective with a smooth inverse. That is to say, $g$ gives a local parametrization of $f^{-1}(b)$ around $a$. Hence, $f^{-1}(b)$ is a manifold. (Note the proof is quite similar to the proof of the implicit function theorem and, in fact, the implicit function theorem can also be used instead.)

More generally, the theorem shows that if a smooth map $f : P \to E$ is transversal to a submanifold $M \subseteq E$, then the pre-image $f^{-1}(M) \subseteq P$ is a submanifold.[11]

Global version

The inverse function theorem is a local result; it applies to each point. A priori, the theorem thus shows only that the function $f$ is locally bijective (or a local diffeomorphism of some class). The next topological lemma can be used to upgrade local injectivity to injectivity that is global to some extent.

Template:Math theorem

Proof:[12] First assume $X$ is compact. If the conclusion of the theorem is false, we can find two sequences $x_i \neq y_i$ such that $f(x_i) = f(y_i)$ and $x_i, y_i$ each converge to some points $x, y$ in $A$. Since $f$ is injective on $A$, $x = y$. Now, if $i$ is large enough, $x_i, y_i$ are in a neighborhood of $x = y$ where $f$ is injective; thus, $x_i = y_i$, a contradiction.

In general, consider the set $E = \{(x, y) \in X^2 \mid x \neq y, \, f(x) = f(y)\}$. It is disjoint from $S \times S$ for any subset $S \subseteq X$ where $f$ is injective. Let $X_1 \subseteq X_2 \subseteq \cdots$ be an increasing sequence of compact subsets with union $X$ and with $X_i$ contained in the interior of $X_{i+1}$. Then, by the first part of the proof, for each $i$, we can find a neighborhood $U_i$ of $A \cap X_i$ such that $U_i^2 \subseteq X^2 \setminus E$. Then $U = \bigcup_i U_i$ has the required property. (See also [13] for an alternative approach.)

The lemma implies the following (a sort of) global version of the inverse function theorem:

Template:Math theorem

Note that if A is a point, then the above is the usual inverse function theorem.

Holomorphic inverse function theorem

There is a version of the inverse function theorem for holomorphic maps.

Template:Math theorem

The theorem follows from the usual inverse function theorem. Indeed, let $J_{\mathbb{R}}(f)$ denote the Jacobian matrix of $f$ in the real variables $x_i, y_i$, and let $J(f)$ denote the Jacobian matrix in the complex variables $z_j$. Then we have $\det J_{\mathbb{R}}(f) = |\det J(f)|^2$, which is nonzero by assumption. Hence, by the usual inverse function theorem, $f$ is injective near $0$ with continuously differentiable inverse. By the chain rule, with $w = f(z)$,

$$\frac{\partial}{\partial \bar{z}_j}\left(f_i^{-1} \circ f\right)(z) = \sum_k \frac{\partial f_i^{-1}}{\partial w_k}(w) \frac{\partial f_k}{\partial \bar{z}_j}(z) + \sum_k \frac{\partial f_i^{-1}}{\partial \bar{w}_k}(w) \frac{\partial \bar{f}_k}{\partial \bar{z}_j}(z),$$

where the left-hand side and the first term on the right vanish since $f_i^{-1} \circ f$ and the $f_k$ are holomorphic. Since the matrix $\left[\frac{\partial \bar{f}_k}{\partial \bar{z}_j}(z)\right]$ is invertible (it is the conjugate of $J(f)$), it follows that $\frac{\partial f_i^{-1}}{\partial \bar{w}_k}(w) = 0$ for each $i, k$; i.e., $f^{-1}$ is holomorphic.

Similarly, there is the implicit function theorem for holomorphic functions.[14]
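The identity $\det J_{\mathbb{R}}(f) = |\det J(f)|^2$ can be spot-checked numerically in one complex variable, where $\det J(f) = f'(z)$. A sketch assuming the hypothetical function $f(z) = z^2 + z$:

```python
def f(z):
    return z * z + z       # holomorphic, f'(z) = 2z + 1

z0 = complex(0.3, 0.5)
h = 1e-6

# partial derivatives of (x, y) -> (Re f, Im f) by central differences
def partial(dz):
    w = (f(z0 + dz) - f(z0 - dz)) / (2 * h)
    return w.real, w.imag

du_dx, dv_dx = partial(complex(h, 0))   # derivative in the x direction
du_dy, dv_dy = partial(complex(0, h))   # derivative in the y direction

det_JR = du_dx * dv_dy - du_dy * dv_dx
fp = 2 * z0 + 1
print(det_JR, abs(fp) ** 2)   # the two values agree
```

For a holomorphic map the Cauchy–Riemann equations force the real Jacobian determinant to equal $|f'(z_0)|^2$, which the finite differences reproduce.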

As already noted earlier, it can happen that an injective smooth function has an inverse that is not smooth (e.g., $f(x) = x^3$ in a real variable). This is not the case for holomorphic functions because of: Template:Math theorem

Formulations for manifolds

The inverse function theorem can be rephrased in terms of differentiable maps between differentiable manifolds. In this context the theorem states that for a differentiable map $F : M \to N$ (of class $C^1$), if the differential of $F$,

$$dF_p : T_pM \to T_{F(p)}N,$$

is a linear isomorphism at a point $p$ in $M$, then there exists an open neighborhood $U$ of $p$ such that

$$F|_U : U \to F(U)$$

is a diffeomorphism. Note that this implies that the connected components of $M$ and $N$ containing $p$ and $F(p)$ have the same dimension, as is already directly implied by the assumption that $dF_p$ is an isomorphism. If the derivative of $F$ is an isomorphism at all points $p$ in $M$, then the map $F$ is a local diffeomorphism.

Generalizations

Banach spaces

The inverse function theorem can also be generalized to differentiable maps between Banach spaces $X$ and $Y$.[15] Let $U$ be an open neighbourhood of the origin in $X$, let $F : U \to Y$ be a continuously differentiable function, and assume that the Fréchet derivative $dF_0 : X \to Y$ of $F$ at $0$ is a bounded linear isomorphism of $X$ onto $Y$. Then there exists an open neighbourhood $V$ of $F(0)$ in $Y$ and a continuously differentiable map $G : V \to X$ such that $F(G(y)) = y$ for all $y$ in $V$. Moreover, $G(y)$ is the only sufficiently small solution $x$ of the equation $F(x) = y$.

There is also the inverse function theorem for Banach manifolds.[16]

Constant rank theorem

The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with constant rank near a point can be put in a particular normal form near that point.[17] Specifically, if $F : M \to N$ has constant rank near a point $p \in M$, then there are open neighborhoods $U$ of $p$ and $V$ of $F(p)$ and there are diffeomorphisms $u : T_pM \to U$ and $v : T_{F(p)}N \to V$ such that $F(U) \subseteq V$ and such that the derivative $dF_p : T_pM \to T_{F(p)}N$ is equal to $v^{-1} \circ F \circ u$. That is, $F$ "looks like" its derivative near $p$. The set of points $p \in M$ such that the rank is constant in a neighborhood of $p$ is an open dense subset of $M$; this is a consequence of semicontinuity of the rank function. Thus the constant rank theorem applies to a generic point of the domain.

When the derivative of $F$ is injective (resp. surjective) at a point $p$, it is also injective (resp. surjective) in a neighborhood of $p$, and hence the rank of $F$ is constant on that neighborhood, and the constant rank theorem applies.

Polynomial functions

If it were true, the Jacobian conjecture would be a variant of the inverse function theorem for polynomials. It states that if a vector-valued polynomial function has a Jacobian determinant that is an invertible polynomial (that is, a nonzero constant), then it has an inverse that is also a polynomial function. It is unknown whether this is true or false, even in the case of two variables. This is a major open problem in the theory of polynomials.
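A triangular example (illustrative only) shows the phenomenon in a case where it is easy: the polynomial map $F(x, y) = (x, y + x^2)$ has constant Jacobian determinant $1$ and the polynomial inverse $G(u, v) = (u, v - u^2)$.

```python
def F(x, y):
    # polynomial map with constant Jacobian determinant:
    # J_F = [[1, 0], [2x, 1]], so det J_F = 1 everywhere
    return (x, y + x * x)

def G(u, v):
    # polynomial inverse of F
    return (u, v - u * u)

pts = [(0.0, 0.0), (1.5, -2.0), (-3.0, 4.0)]
print([G(*F(x, y)) for (x, y) in pts])  # recovers the input points
```

The conjecture asks whether every polynomial map with constant nonzero Jacobian determinant admits such a polynomial inverse.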

Selections

When $f : \mathbb{R}^n \to \mathbb{R}^m$ with $m \leq n$, $f$ is $k$ times continuously differentiable, and the Jacobian $A = f'(\bar{x})$ at a point $\bar{x}$ is of rank $m$, the inverse of $f$ may not be unique. However, there exists a local selection function $s$ such that $f(s(y)) = y$ for all $y$ in a neighborhood of $\bar{y} = f(\bar{x})$, $s(\bar{y}) = \bar{x}$, $s$ is $k$ times continuously differentiable in this neighborhood, and $s'(\bar{y}) = A^T (A A^T)^{-1}$ ($s'(\bar{y})$ is the Moore–Penrose pseudoinverse of $A$).[18]
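In the simplest linear case the selection is given exactly by the pseudoinverse. A sketch with the hypothetical map $f(x_1, x_2) = x_1 + x_2$, so $m = 1$, $n = 2$, $A = [1 \ 1]$ and $A^T (A A^T)^{-1} = [1/2, \, 1/2]^T$:

```python
# Linear case: f(x1, x2) = x1 + x2, i.e. A = [1 1] with rank 1 = m
def f(x1, x2):
    return x1 + x2

def s(y):
    # selection via the Moore-Penrose pseudoinverse:
    # A^T (A A^T)^{-1} = [1, 1]^T * (1/2) = [1/2, 1/2]^T
    return (y / 2.0, y / 2.0)

ys = [0.0, 1.0, -3.5]
print([f(*s(y)) for y in ys])  # each value equals the corresponding y
```

Here $s$ picks, among the infinitely many preimages of $y$, the one of minimal norm, which is exactly what the pseudoinverse formula provides.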

Over a real closed field

The inverse function theorem also holds over a real closed field $k$ (or an O-minimal structure).[19] Precisely, the theorem holds for a semialgebraic (or definable) map between open subsets of $k^n$ that is continuously differentiable.

The usual proof of the inverse function theorem uses Banach's fixed point theorem, which relies on Cauchy completeness. That part of the argument is replaced by the use of the extreme value theorem, which does not need completeness. Explicitly, in the proof in § A proof using the contraction mapping principle, Cauchy completeness is used only to establish the inclusion $B(0, r/2) \subseteq f(B(0, r))$. Here, we shall directly show $B(0, r/4) \subseteq f(B(0, r))$ instead (which is enough). Given a point $y$ in $B(0, r/4)$, consider the function $P(x) = |f(x) - y|^2$ defined on a neighborhood of $\overline{B}(0, r)$. If $P'(x) = 0$, then $0 = P'(x) = 2\left[f_1(x) - y_1, \ldots, f_n(x) - y_n\right] f'(x)$ and so $f(x) = y$, since $f'(x)$ is invertible. Now, by the extreme value theorem, $P$ attains a minimum at some point $x_0$ on the closed ball $\overline{B}(0, r)$, which can be shown to lie in the open ball $B(0, r)$ using $2^{-1}|x| \leq |f(x)|$. Since $P'(x_0) = 0$, it follows that $f(x_0) = y$, which proves the claimed inclusion.

Alternatively, one can deduce the theorem from the one over the real numbers by Tarski's principle.


Notes

Template:Reflist
