Control-Lyapunov function


In control theory, a control-Lyapunov function (CLF)[1][2][3][4] is an extension of the idea of Lyapunov function $V(x)$ to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is (Lyapunov) stable or (more restrictively) asymptotically stable. Lyapunov stability means that if the system starts in a state $x_0$ in some domain $D$, then the state will remain in $D$ for all time. For asymptotic stability, the state is also required to converge to $x = 0$. A control-Lyapunov function is used to test whether a system is asymptotically stabilizable, that is, whether for any state $x$ there exists a control $u(x,t)$ such that the system can be brought to the zero state asymptotically by applying the control $u$.

The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.

Definition

Consider an autonomous dynamical system with inputs

$\dot{x} = f(x, u)$

where $x \in \mathbb{R}^n$ is the state vector and $u \in \mathbb{R}^m$ is the control vector. Suppose our goal is to drive the system to an equilibrium $x^* \in \mathbb{R}^n$ from every initial state in some domain $D \subset \mathbb{R}^n$. Without loss of generality, suppose the equilibrium is at $x^* = 0$ (an equilibrium $x^* \neq 0$ can be translated to the origin by a change of variables).

Definition. A control-Lyapunov function (CLF) is a function $V : D \to \mathbb{R}$ that is continuously differentiable, positive-definite (that is, $V(x)$ is positive for all $x \in D$ except at $x = 0$, where it is zero), and such that for all $x \in \mathbb{R}^n$ ($x \neq 0$) there exists $u \in \mathbb{R}^m$ such that

$\dot{V}(x,u) := \langle \nabla V(x), f(x,u) \rangle < 0,$

where $\langle u, v \rangle$ denotes the inner product of $u, v \in \mathbb{R}^n$.

The last condition is the key condition; in words it says that for each state x we can find a control u that will reduce the "energy" V. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is to bring the system to a stop. This is made rigorous by Artstein's theorem.
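This decrease condition can be illustrated numerically. The following is a minimal sketch, assuming (for illustration only) the scalar integrator $\dot{x} = u$ and the candidate $V(x) = \tfrac{1}{2}x^2$:

```python
# Minimal numerical sketch of the CLF decrease condition, assuming the
# scalar integrator x' = u and the candidate V(x) = x^2 / 2.
def V(x):
    return 0.5 * x ** 2

def grad_V(x):
    return x

def f(x, u):
    return u  # single-integrator dynamics (an assumed toy system)

def V_dot(x, u):
    return grad_V(x) * f(x, u)  # <grad V(x), f(x,u)>

# The feedback u = -x makes V_dot = -x^2 < 0 at every x != 0,
# so this V qualifies as a control-Lyapunov function for the toy system.
for x in [-2.0, -0.5, 0.1, 3.0]:
    assert V_dot(x, -x) < 0
```

At every sampled state an admissible input reduces the "energy", which is exactly what the definition requires.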

Some results apply only to control-affine systems, i.e., control systems of the form

$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x) u_i$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ and $g_i : \mathbb{R}^n \to \mathbb{R}^n$ for $i = 1, \dots, m$.

Theorems

Eduardo Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable.[5] It was later shown by Francis H. Clarke, Yuri Ledyaev, Eduardo Sontag, and A. I. Subbotin that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.[6] Artstein proved that the dynamical system above has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback $u(x)$.

Constructing the Stabilizing Input

It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system above, Sontag's formula (or Sontag's universal formula) gives the feedback law $k : \mathbb{R}^n \to \mathbb{R}^m$ directly in terms of the derivatives of the CLF.[4] In the special case of a single-input system ($m = 1$), Sontag's formula is written as

$k(x) = \begin{cases} -\dfrac{L_f V(x) + \sqrt{[L_f V(x)]^2 + [L_g V(x)]^4}}{L_g V(x)} & \text{if } L_g V(x) \neq 0 \\ 0 & \text{if } L_g V(x) = 0 \end{cases}$

where $L_f V(x) := \langle \nabla V(x), f(x) \rangle$ and $L_g V(x) := \langle \nabla V(x), g(x) \rangle$ are the Lie derivatives of $V$ along $f$ and $g$, respectively.
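For a single-input system the formula can be implemented directly. The sketch below assumes, purely for illustration, the unstable scalar system $f(x) = x$, $g(x) = 1$ with CLF $V(x) = \tfrac{1}{2}x^2$:

```python
import math

def sontag_feedback(LfV, LgV):
    # Sontag's universal formula for a single-input system (m = 1).
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV ** 2 + LgV ** 4)) / LgV

# Assumed example system: x' = f(x) + g(x) u with f(x) = x (unstable),
# g(x) = 1, and CLF V(x) = x^2 / 2, so LfV = x * x and LgV = x.
def k(x):
    return sontag_feedback(x * x, x)

for x in [-1.0, 0.5, 2.0]:
    u = k(x)
    V_dot = x * x + x * u  # LfV + LgV * u along the closed loop
    assert V_dot < 0       # the feedback makes V strictly decrease
```

For this example the formula reduces to $k(x) = -(1 + \sqrt{2})\,x$, a linear stabilizing feedback.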

For the general nonlinear system above, the input $u$ can be found by solving a static non-linear programming problem

$u^*(x) = \arg\min_{u} \langle \nabla V(x), f(x,u) \rangle$

for each state $x$.
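In simple low-dimensional cases this minimization can be approximated by brute force. The sketch below searches a grid of candidate inputs; the non-affine scalar dynamics and the CLF $V(x) = \tfrac{1}{2}x^2$ are assumptions for illustration, and the grid search is a crude stand-in for a proper nonlinear programming solver:

```python
import numpy as np

def f(x, u):
    return x ** 3 + u ** 3   # assumed non-affine scalar dynamics

def grad_V(x):
    return x                 # gradient of V(x) = x^2 / 2

def best_input(x, u_grid):
    # Approximate u*(x) = argmin_u <grad V(x), f(x,u)> by grid search.
    costs = grad_V(x) * f(x, u_grid)
    return float(u_grid[np.argmin(costs)])

u_grid = np.linspace(-5.0, 5.0, 1001)
for x in [-2.0, 0.7, 1.5]:
    u_star = best_input(x, u_grid)
    assert grad_V(x) * f(x, u_star) < 0   # chosen input decreases V
```

Solving such a problem at every state (or every sampling instant) is what makes CLF-based control computationally demanding for general nonlinear systems.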

Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the nonlinear mass-spring-damper system with spring hardening and position-dependent mass described by

$m(1+q^2)\ddot{q} + b\dot{q} + K_0 q + K_1 q^3 = u$

Now, given the desired state $q_d$ and actual state $q$, with error $e = q_d - q$, define a function $r$ as

$r = \dot{e} + \alpha e$

A control-Lyapunov candidate is then

$V(r) := \frac{1}{2} r^2$

which is positive for all $r \neq 0$.

Now taking the time derivative of $V$:

$\dot{V} = r\dot{r}$
$\dot{V} = (\dot{e} + \alpha e)(\ddot{e} + \alpha \dot{e})$

The goal is to get the time derivative to be

$\dot{V} = -\kappa V$

which is globally exponentially stable if $V$ is globally positive definite (which it is).

Hence we want the rightmost bracket of $\dot{V}$,

$(\ddot{e} + \alpha \dot{e}) = (\ddot{q}_d - \ddot{q} + \alpha \dot{e})$

to fulfill the requirement

$(\ddot{q}_d - \ddot{q} + \alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e} + \alpha e)$

which, upon substitution of the dynamics $\ddot{q}$, gives

$\left(\ddot{q}_d - \dfrac{u - K_0 q - K_1 q^3 - b\dot{q}}{m(1+q^2)} + \alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e} + \alpha e)$

Solving for $u$ yields the control law

$u = m(1+q^2)\left(\ddot{q}_d + \alpha \dot{e} + \frac{\kappa}{2} r\right) + K_0 q + K_1 q^3 + b\dot{q}$

with $\kappa$ and $\alpha$, both greater than zero, as tunable parameters.

This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,

$\dot{V} = -\kappa V$

which is a linear first-order differential equation with solution

$V = V(0) e^{-\kappa t}$

Hence the error and error rate, remembering that $V = \frac{1}{2}(\dot{e} + \alpha e)^2$, decay exponentially to zero.

To tune a particular response from this, substitute back into the solution derived for $V$ and solve for $e$. This is left as an exercise for the reader, but the first few steps of the solution are:

$r\dot{r} = -\frac{\kappa}{2} r^2$
$\dot{r} = -\frac{\kappa}{2} r$
$r = r(0) e^{-\frac{\kappa}{2} t}$
$\dot{e} + \alpha e = \left(\dot{e}(0) + \alpha e(0)\right) e^{-\frac{\kappa}{2} t}$

which can then be solved using any linear differential equation methods.
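The derived control law can also be checked numerically. The sketch below simulates the closed loop with forward Euler integration; the parameter values and the constant setpoint $q_d$ (so $\dot{q}_d = \ddot{q}_d = 0$) are illustrative assumptions:

```python
import math

# Illustrative parameters for the mass-spring-damper example.
m, b, K0, K1 = 1.0, 0.5, 1.0, 0.2
alpha, kappa = 2.0, 4.0
q_d = 1.0                        # constant setpoint, so q_d' = q_d'' = 0

def control(q, qdot):
    e, edot = q_d - q, -qdot
    r = edot + alpha * e
    # u = m(1+q^2)(q_d'' + alpha*e' + (kappa/2) r) + K0 q + K1 q^3 + b q'
    return (m * (1 + q ** 2) * (alpha * edot + 0.5 * kappa * r)
            + K0 * q + K1 * q ** 3 + b * qdot)

def V_of(q, qdot):
    r = -qdot + alpha * (q_d - q)
    return 0.5 * r ** 2          # V = r^2 / 2

q, qdot = 0.0, 0.0
steps, dt = 10_000, 1e-4
V0 = V_of(q, qdot)
for _ in range(steps):           # simulate one second with forward Euler
    u = control(q, qdot)
    qddot = (u - b * qdot - K0 * q - K1 * q ** 3) / (m * (1 + q ** 2))
    q, qdot = q + dt * qdot, qdot + dt * qddot

V1 = V_of(q, qdot)
assert V1 < V0                                     # the "energy" decayed
assert abs(V1 - V0 * math.exp(-kappa)) < 0.1 * V0  # roughly V(0) e^(-kappa t)
```

After one simulated second, $V$ has decayed to approximately $V(0)e^{-\kappa}$, matching the exponential rate derived above.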

References

Template:Reflist

