Magnus expansion

Revision as of 07:52, 27 May 2024 by 67.198.37.16 (talk) (product integral)

In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the product integral solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series whose terms involve multiple integrals and nested commutators.

The deterministic case

Magnus approach and its interpretation

Given the n × n coefficient matrix A(t), one wishes to solve the initial-value problem associated with the linear ordinary differential equation

$$Y'(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0,$$

for the unknown n-dimensional vector function Y(t).

When n = 1, the solution is given as a product integral

$$Y(t) = \exp\left(\int_{t_0}^{t} A(s)\,ds\right) Y_0.$$

This is still valid for n > 1 if the matrix A(t) satisfies $A(t_1)\,A(t_2) = A(t_2)\,A(t_1)$ for any pair of values t_1 and t_2. In particular, this is the case if the matrix A is independent of t. In the general case, however, the expression above is no longer the solution of the problem.
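The failure of the naive exponential in the non-commuting case is easy to observe numerically. The sketch below (an illustration, not part of the original article; the matrix families are arbitrary choices) integrates $Y' = A(t)Y$ accurately with RK4 and compares the result against $\exp\bigl(\int_0^t A(s)\,ds\bigr) Y_0$ for a commuting and a non-commuting family:

```python
import numpy as np
from scipy.linalg import expm

def rk4_solve(A, Y0, t1, steps=2000):
    """Reference solution of Y'(t) = A(t) Y(t) by classical RK4."""
    h = t1 / steps
    Y, t = Y0.astype(float), 0.0
    for _ in range(steps):
        k1 = A(t) @ Y
        k2 = A(t + h/2) @ (Y + h/2 * k1)
        k3 = A(t + h/2) @ (Y + h/2 * k2)
        k4 = A(t + h) @ (Y + h * k3)
        Y, t = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4), t + h
    return Y

def naive_exp(A, Y0, t1, steps=2000):
    """exp(integral of A) applied to Y0 -- correct only when A(t1), A(t2) commute."""
    h = t1 / steps
    integral = sum(A((k + 0.5) * h) for k in range(steps)) * h  # midpoint rule
    return expm(integral) @ Y0

Y0 = np.eye(2)
M = np.array([[0., 1.], [-1., 0.]])
commuting = lambda t: (1 + t) * M            # A(t1) A(t2) = A(t2) A(t1)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
noncommuting = lambda t: X + t * Z           # [A(t1), A(t2)] != 0 for t1 != t2

err_c = np.linalg.norm(rk4_solve(commuting, Y0, 1.0) - naive_exp(commuting, Y0, 1.0))
err_n = np.linalg.norm(rk4_solve(noncommuting, Y0, 1.0) - naive_exp(noncommuting, Y0, 1.0))
print(err_c, err_n)
```

For the commuting family the two results agree to discretization accuracy, while for the non-commuting family a discrepancy of order one remains no matter how fine the grid: the missing piece is precisely what the Magnus correction terms supply.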

The approach introduced by Magnus to solve the matrix initial-value problem is to express the solution by means of the exponential of a certain n × n matrix function Ω(t, t_0):

$$Y(t) = \exp\bigl(\Omega(t, t_0)\bigr)\, Y_0,$$

which is subsequently constructed as a series expansion:

$$\Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t),$$

where, for simplicity, it is customary to write Ω(t) for Ω(t, t_0) and to take t_0 = 0.

Magnus appreciated that, since $\bigl(\tfrac{d}{dt} e^{\Omega}\bigr) e^{-\Omega} = A(t)$, using a Poincaré−Hausdorff matrix identity, he could relate the time derivative of Ω to the generating function of Bernoulli numbers and the adjoint endomorphism of Ω,

$$\Omega' = \frac{\operatorname{ad}_{\Omega}}{\exp(\operatorname{ad}_{\Omega}) - 1}\, A,$$

to solve for Ω recursively in terms of A "in a continuous analog of the BCH expansion", as outlined in a subsequent section.

The equation above constitutes the Magnus expansion, or Magnus series, for the solution of the matrix linear initial-value problem. The first four terms of this series read

$$\begin{aligned}
\Omega_1(t) &= \int_0^t A(t_1)\,dt_1, \\
\Omega_2(t) &= \frac{1}{2}\int_0^t dt_1 \int_0^{t_1} dt_2\, [A(t_1), A(t_2)], \\
\Omega_3(t) &= \frac{1}{6}\int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3\, \bigl([A(t_1), [A(t_2), A(t_3)]] + [A(t_3), [A(t_2), A(t_1)]]\bigr), \\
\Omega_4(t) &= \frac{1}{12}\int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3 \int_0^{t_3} dt_4\, \bigl([[[A_1, A_2], A_3], A_4] + [A_1, [[A_2, A_3], A_4]] \\
&\qquad\qquad + [A_1, [A_2, [A_3, A_4]]] + [A_2, [A_3, [A_4, A_1]]]\bigr),
\end{aligned}$$

where $[A, B] \equiv AB - BA$ is the matrix commutator of A and B, and $A_i$ is shorthand for $A(t_i)$.

These equations may be interpreted as follows: Ω_1(t) coincides exactly with the exponent in the scalar (n = 1) case, but this term alone cannot give the whole solution for n > 1. If one insists on having an exponential representation (Lie group), the exponent needs to be corrected. The rest of the Magnus series provides that correction systematically: Ω and its truncations lie in the Lie algebra of the Lie group of the evolution, so the exponential stays in the group.

In applications, one can rarely sum the Magnus series exactly, and one has to truncate it to get approximate solutions. The main advantage of the Magnus proposal is that the truncated series very often shares important qualitative properties with the exact solution, in contrast with other conventional perturbation theories. For instance, in classical mechanics the symplectic character of the time evolution is preserved at every order of approximation. Similarly, the unitary character of the time evolution operator in quantum mechanics is also preserved (in contrast, e.g., to the Dyson series solving the same problem).
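The preservation of unitarity under truncation can be checked directly: for $A(t) = -iH(t)$ with $H(t)$ Hermitian, every $\Omega_k$ is anti-Hermitian, so its exponential is exactly unitary, whereas a truncated Dyson series is not. A small numerical sketch (illustrative Hamiltonian, not taken from the article) with the second-order Magnus truncation:

```python
import numpy as np
from scipy.linalg import expm

# Hermitian Hamiltonian H(t); A(t) = -i H(t) generates unitary evolution
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = lambda t: X + t * Z

N, T = 400, 1.0
h = T / N
ts = (np.arange(N) + 0.5) * h                 # midpoint grid

# Omega_1 = -i * integral of H(t) dt
Omega1 = -1j * sum(H(t) for t in ts) * h
# Omega_2 = (1/2) * double integral of [A(t1), A(t2)] over t2 < t1
comm = lambda P, Q: P @ Q - Q @ P
Omega2 = np.zeros((2, 2), dtype=complex)
for a, t1 in enumerate(ts):
    for t2 in ts[:a]:
        Omega2 += 0.5 * comm(-1j * H(t1), -1j * H(t2)) * h * h

U_magnus = expm(Omega1 + Omega2)              # truncated Magnus propagator
U_dyson = np.eye(2) + Omega1                  # first-order Dyson truncation

unit_err_magnus = np.linalg.norm(U_magnus.conj().T @ U_magnus - np.eye(2))
unit_err_dyson = np.linalg.norm(U_dyson.conj().T @ U_dyson - np.eye(2))
print(unit_err_magnus, unit_err_dyson)
```

The Magnus propagator is unitary to rounding error even though it is only a second-order approximation of the dynamics; the Dyson truncation visibly violates unitarity.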

Convergence of the expansion

From a mathematical point of view, the convergence problem is the following: given a certain matrix A(t), when can the exponent Ω(t) be obtained as the sum of the Magnus series?

A sufficient condition for this series to converge for t ∈ [0, T) is

$$\int_0^T \|A(s)\|_2\, ds < \pi,$$

where $\|\cdot\|_2$ denotes a matrix norm. This result is generic in the sense that one may construct specific matrices A(t) for which the series diverges for any t > T.
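For a concrete coefficient matrix the sufficient condition can be checked by quadrature. A quick sketch (the matrix is an arbitrary illustrative choice) using the spectral norm:

```python
import numpy as np

# Check the sufficient convergence condition  integral_0^T ||A(s)||_2 ds < pi
# for a sample coefficient matrix.
A = lambda t: np.array([[0., 1. + t], [-1., 0.]])

def norm_integral(T, steps=2000):
    """Midpoint-rule approximation of the integral of the spectral norm of A."""
    h = T / steps
    return sum(np.linalg.norm(A((k + 0.5) * h), 2) for k in range(steps)) * h

print(norm_integral(1.0), np.pi)  # 1.5 < pi, so the series converges on [0, 1)
```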

Magnus generator

A recursive procedure to generate all the terms in the Magnus expansion utilizes the matrices $S_n^{(j)}$ defined recursively through

$$S_n^{(j)} = \sum_{m=1}^{n-j} \left[\Omega_m, S_{n-m}^{(j-1)}\right], \quad 2 \le j \le n-1,$$
$$S_n^{(1)} = \left[\Omega_{n-1}, A\right], \qquad S_n^{(n-1)} = \operatorname{ad}_{\Omega_1}^{\,n-1}(A),$$

which then furnish

$$\Omega_1 = \int_0^t A(\tau)\,d\tau,$$
$$\Omega_n = \sum_{j=1}^{n-1} \frac{B_j}{j!} \int_0^t S_n^{(j)}(\tau)\,d\tau, \quad n \ge 2.$$

Here $\operatorname{ad}_{\Omega}^{k}$ is a shorthand for an iterated commutator (see adjoint endomorphism):

$$\operatorname{ad}_{\Omega}^{0} A = A, \qquad \operatorname{ad}_{\Omega}^{k+1} A = \bigl[\Omega, \operatorname{ad}_{\Omega}^{k} A\bigr],$$

while $B_j$ are the Bernoulli numbers with $B_1 = -1/2$.
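The recursion lends itself to a grid-based implementation: each $\Omega_n$ and $S_n^{(j)}$ is represented by its samples on a uniform time grid, and the integrals are accumulated with the trapezoidal rule. The sketch below (an illustration under these discretization assumptions, not code from the article) generates $\Omega_1, \Omega_2, \Omega_3$:

```python
import numpy as np
from math import factorial

def magnus_terms(A_samples, h, N=3):
    """Samples of Omega_1..Omega_N on a uniform grid via the S_n^(j) recursion.

    A_samples: array of shape (K, d, d) holding A(t_k); returns dict n -> (K, d, d).
    """
    bernoulli = {1: -0.5, 2: 1.0 / 6.0}             # B_j with B_1 = -1/2
    comm = lambda P, Q: P @ Q - Q @ P               # pointwise commutator on the grid

    def cumint(F):                                  # cumulative trapezoidal integral
        out = np.zeros_like(F)
        out[1:] = np.cumsum(0.5 * h * (F[1:] + F[:-1]), axis=0)
        return out

    Omega = {1: cumint(A_samples)}
    S_all = {}                                      # S_all[n][j] = samples of S_n^(j)
    for n in range(2, N + 1):
        S_n = {1: comm(Omega[n - 1], A_samples)}    # S_n^(1) = [Omega_{n-1}, A]
        for j in range(2, n):
            S_n[j] = sum(comm(Omega[m], S_all[n - m][j - 1])
                         for m in range(1, n - j + 1))
        S_all[n] = S_n
        Omega[n] = sum(bernoulli[j] / factorial(j) * cumint(S_n[j])
                       for j in range(1, n))
    return Omega

# Sanity check: for a commuting family A(t) = (1 + t) M all corrections vanish.
K = 2001
t = np.linspace(0.0, 1.0, K)
M = np.array([[0., 1.], [-1., 0.]])
A_samples = (1 + t)[:, None, None] * M
Om = magnus_terms(A_samples, t[1] - t[0])
print(np.linalg.norm(Om[2][-1]), np.linalg.norm(Om[3][-1]))  # both ~ 0
```

For a non-commuting family such as $A(t) = X + tZ$ (Pauli-type matrices), the same routine reproduces the value $\Omega_2(1)$ obtained from the explicit double-integral formula.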

Finally, when this recursion is worked out explicitly, it is possible to express $\Omega_n(t)$ as a linear combination of n-fold integrals of n − 1 nested commutators involving n matrices A:

$$\Omega_n(t) = \sum_{j=1}^{n-1} \frac{B_j}{j!} \sum_{\substack{k_1 + \cdots + k_j = n-1 \\ k_1 \ge 1, \ldots, k_j \ge 1}} \int_0^t \operatorname{ad}_{\Omega_{k_1}(\tau)} \operatorname{ad}_{\Omega_{k_2}(\tau)} \cdots \operatorname{ad}_{\Omega_{k_j}(\tau)}\, A(\tau)\, d\tau, \quad n \ge 2,$$

an expression that becomes increasingly intricate with n.

The stochastic case

Extension to stochastic ordinary differential equations

For the extension to the stochastic case, let $(W_t)_{t \in [0,T]}$ be a q-dimensional Brownian motion, q > 0, on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with finite time horizon T > 0 and natural filtration. Now, consider the linear matrix-valued stochastic Itô differential equation (with Einstein's summation convention over the index j)

$$dX_t = B_t X_t\, dt + A_t^{(j)} X_t\, dW_t^j, \qquad X_0 = I_d, \quad d > 0,$$

where $B, A^{(1)}, \ldots, A^{(q)}$ are progressively measurable, bounded, $d \times d$-valued stochastic processes and $I_d$ is the identity matrix. Following the same approach as in the deterministic case, with alterations due to the stochastic setting,[1] the corresponding matrix logarithm turns out to be an Itô process whose first two expansion orders are given by $Y_t^{(1)} = Y_t^{(1,0)} + Y_t^{(0,1)}$ and $Y_t^{(2)} = Y_t^{(2,0)} + Y_t^{(1,1)} + Y_t^{(0,2)}$, where, with Einstein's summation convention over i and j,

$$\begin{aligned}
Y_t^{(0,0)} &= 0, \\
Y_t^{(1,0)} &= \int_0^t A_s^{(j)}\, dW_s^j, \\
Y_t^{(0,1)} &= \int_0^t B_s\, ds, \\
Y_t^{(2,0)} &= -\frac{1}{2} \int_0^t \bigl(A_s^{(j)}\bigr)^2\, ds + \frac{1}{2} \int_0^t \Bigl[ A_s^{(j)}, \int_0^s A_r^{(i)}\, dW_r^i \Bigr]\, dW_s^j, \\
Y_t^{(1,1)} &= \frac{1}{2} \int_0^t \Bigl[ B_s, \int_0^s A_r^{(j)}\, dW_r^j \Bigr]\, ds + \frac{1}{2} \int_0^t \Bigl[ A_s^{(j)}, \int_0^s B_r\, dr \Bigr]\, dW_s^j, \\
Y_t^{(0,2)} &= \frac{1}{2} \int_0^t \Bigl[ B_s, \int_0^s B_r\, dr \Bigr]\, ds.
\end{aligned}$$
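In the scalar case (d = q = 1) with constant coefficients all commutators vanish, and the low-order terms already reproduce the exact logarithm: $Y_t = Bt + AW_t - \tfrac{1}{2}A^2 t$, the exponent of the explicit geometric-Brownian-motion solution; note how $Y^{(2,0)}$ supplies the Itô correction $-\tfrac{1}{2}A^2 t$. A minimal simulation sketch (illustrative constant scalar coefficients, not from the article) compares this against a Milstein discretization of the SDE along the same Brownian path:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, T, n = 0.4, 0.7, 1.0, 200_000
h = T / n
dW = rng.normal(0.0, np.sqrt(h), n)   # Brownian increments
W_T = dW.sum()

# Milstein scheme for dX = b X dt + a X dW (strong order 1)
X = 1.0
for dw in dW:
    X *= 1.0 + b * h + a * dw + 0.5 * a * a * (dw * dw - h)

# Truncated stochastic Magnus logarithm: Y^(0,1) + Y^(1,0) + Y^(2,0)
Y = b * T + a * W_T - 0.5 * a * a * T
X_magnus = np.exp(Y)
print(X, X_magnus)  # pathwise agreement up to the discretization error
```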

Convergence of the expansion

In the stochastic setting, convergence is now subject to a stopping time τ, and a first convergence result is given by:[2]

Under the previous assumptions on the coefficients, there exists a strong solution $X = (X_t)_{t \in [0,T]}$, as well as a strictly positive stopping time $\tau \le T$, such that:

  1. $X_t$ has a real logarithm $Y_t$ up to time τ, i.e.
     $$X_t = e^{Y_t}, \quad 0 \le t < \tau;$$
  2. the following representation holds $\mathbb{P}$-almost surely:
     $$Y_t = \sum_{n=0}^{\infty} Y_t^{(n)}, \quad 0 \le t < \tau,$$
     where $Y^{(n)}$ is the n-th term in the stochastic Magnus expansion as defined below in the subsection Magnus expansion formula;
  3. there exists a positive constant C, dependent only on $\|A^{(1)}\|_T, \ldots, \|A^{(q)}\|_T, \|B\|_T, T, d$, with $\|A\|_T := \bigl\| \|A_t\|_F \bigr\|_{L^\infty(\Omega \times [0,T])}$, such that
     $$\mathbb{P}(\tau \le t) \le C\, t, \quad t \in [0,T].$$

Magnus expansion formula

The general expansion formula for the stochastic Magnus expansion is given by

$$Y_t = \sum_{n=0}^{\infty} Y_t^{(n)} \quad \text{with} \quad Y_t^{(n)} := \sum_{r=0}^{n} Y_t^{(r,\, n-r)},$$

where the general term $Y^{(r,\, n-r)}$ is an Itô process of the form

$$Y_t^{(r,\, n-r)} = \int_0^t \mu_s^{r,\, n-r}\, ds + \int_0^t \sigma_s^{r,\, n-r,\, j}\, dW_s^j, \quad n \ge 0, \quad r = 0, \ldots, n.$$

The terms $\sigma^{r,\, n-r,\, j}$ and $\mu^{r,\, n-r}$ are defined recursively as

$$\sigma_s^{r,\, n-r,\, j} := \sum_{i=0}^{n-1} \frac{\beta_i}{i!}\, S_s^{r-1,\, n-r,\, i}\bigl(A^{(j)}\bigr),$$
$$\mu_s^{r,\, n-r} := \sum_{i=0}^{n-1} \frac{\beta_i}{i!}\, S_s^{r,\, n-r-1,\, i}(B) - \frac{1}{2} \sum_{j=1}^{q} \sum_{i=0}^{n-2} \frac{\beta_i}{i!} \sum_{q_1=2}^{r} \sum_{q_2=0}^{n-r} S_s^{r-q_1,\, n-r-q_2,\, i}\bigl(Q_s^{q_1, q_2, j}\bigr),$$

with

$$Q_s^{q_1, q_2, j} := \sum_{i_1=2}^{q_1} \sum_{i_2=0}^{q_2} \sum_{h_1=1}^{i_1-1} \sum_{h_2=0}^{i_2} \sum_{p_1=0}^{q_1-i_1} \sum_{p_2=0}^{q_2-i_2} \sum_{m_1=0}^{p_1+p_2} \sum_{m_2=0}^{q_1-i_1-p_1+q_2-i_2-p_2} \left( \frac{S_s^{p_1, p_2, m_1}\bigl(\sigma_s^{h_1, h_2, j}\bigr)}{(m_1+1)!} \cdot \frac{S_s^{q_1-i_1-p_1,\, q_2-i_2-p_2,\, m_2}\bigl(\sigma_s^{i_1-h_1,\, i_2-h_2,\, j}\bigr)}{(m_2+1)!} + \frac{\Bigl[ S_s^{p_1, p_2, m_1}\bigl(\sigma_s^{i_1-h_1,\, i_2-h_2,\, j}\bigr),\; S_s^{q_1-i_1-p_1,\, q_2-i_2-p_2,\, m_2}\bigl(\sigma_s^{h_1, h_2, j}\bigr) \Bigr]}{(m_1+m_2+2)\,(m_1+1)!\; m_2!} \right),$$

and with the operators $S^{r-1,\, n-r,\, i}$ being defined as

$$S_s^{r-1,\, n-r,\, 0}(A) := \begin{cases} A & \text{if } r = n = 1, \\ 0 & \text{otherwise}, \end{cases}$$
$$S_s^{r-1,\, n-r,\, i}(A) := \sum_{\substack{(j_1, k_1), \ldots, (j_i, k_i) \in \mathbb{N}_0^2 \\ j_1 + \cdots + j_i = r-1 \\ k_1 + \cdots + k_i = n-r}} \bigl[ Y_s^{(j_1, k_1)}, \bigl[ \cdots, \bigl[ Y_s^{(j_i, k_i)}, A_s \bigr] \cdots \bigr] \bigr] = \sum_{\substack{(j_1, k_1), \ldots, (j_i, k_i) \in \mathbb{N}_0^2 \\ j_1 + \cdots + j_i = r-1 \\ k_1 + \cdots + k_i = n-r}} \operatorname{ad}_{Y_s^{(j_1, k_1)}} \cdots \operatorname{ad}_{Y_s^{(j_i, k_i)}}\bigl(A_s\bigr), \quad \forall i \ge 1.$$

Applications

Since the 1960s, the Magnus expansion has been successfully applied as a perturbative tool in numerous areas of physics and chemistry, from atomic and molecular physics to nuclear magnetic resonance[3] and quantum electrodynamics.[4] It has also been used since 1998 as a tool to construct practical algorithms for the numerical integration of matrix linear differential equations. Because they inherit from the Magnus expansion the preservation of qualitative traits of the problem, the corresponding schemes are prototypical examples of geometric numerical integrators.
