Linear complementarity problem


In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.

Formulation

Given a real matrix M and a vector q, the linear complementarity problem LCP(q, M) seeks vectors z and w which satisfy the following constraints:

  • $w, z \geq 0$ (that is, each component of these two vectors is non-negative)
  • $z^T w = 0$ or equivalently $\sum_i w_i z_i = 0$. This is the complementarity condition, since it implies that, for all $i$, at most one of $w_i$ and $z_i$ can be positive.
  • $w = Mz + q$

A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite. If M is such that LCP(q, M) has a solution for every q, then M is a Q-matrix. If M is such that LCP(q, M) has a unique solution for every q, then M is a P-matrix. Both of these characterizations are sufficient and necessary.

The vector w is a slack variable, and so is generally discarded after z is found. As such, the problem can also be formulated as:

  • $Mz + q \geq 0$
  • $z \geq 0$
  • $z^T (Mz + q) = 0$ (the complementarity condition)
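
These conditions are straightforward to check numerically for a candidate z. The following is a minimal sketch in Python with NumPy; the instance data and the function name are chosen purely for illustration and are not taken from the article.

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check the three LCP conditions for a candidate z, with w = M z + q."""
    w = M @ z + q
    nonneg = np.all(w >= -tol) and np.all(z >= -tol)   # w >= 0 and z >= 0
    complementary = abs(z @ w) <= tol                  # z^T w = 0
    return nonneg and complementary

# Hypothetical instance: M = I, q = (-1, 2); z = (1, 0) gives w = (0, 2).
M = np.eye(2)
q = np.array([-1.0, 2.0])
print(is_lcp_solution(M, q, np.array([1.0, 0.0])))   # True
print(is_lcp_solution(M, q, np.array([0.0, 0.0])))   # False: w = q has a negative entry
```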

Convex quadratic-minimization: Minimum conditions

Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function

$f(z) = z^T (Mz + q)$

subject to the constraints

$Mz + q \geq 0$
$z \geq 0$

These constraints ensure that f is always non-negative. The minimum of f is 0 at z if and only if z solves the linear complementarity problem.
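
As a sketch of this equivalence, the quadratic function can be minimized with any general-purpose constrained solver. The snippet below uses SciPy's SLSQP method on a small instance whose data (M, q) are made up for illustration; if the reported minimum is numerically zero, the minimizer solves LCP(q, M).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical positive-definite instance, for illustration only.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])

f = lambda z: z @ (M @ z + q)                           # f(z) = z^T (M z + q)
cons = [{"type": "ineq", "fun": lambda z: M @ z + q}]   # M z + q >= 0
bounds = [(0, None)] * len(q)                           # z >= 0

res = minimize(f, x0=np.ones(len(q)), bounds=bounds, constraints=cons, method="SLSQP")
print(res.x, f(res.x))   # f(res.x) close to 0  =>  res.x solves LCP(q, M)
```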

If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of Dantzig's simplex algorithm, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.
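
For illustration, here is a minimal sketch of a complementary-pivoting scheme in the spirit of Lemke's algorithm, using a dense NumPy tableau and an artificial variable with an all-ones covering vector. It omits anti-cycling and degeneracy safeguards, so it should be read as a sketch rather than a production implementation.

```python
import numpy as np

def lemke(M, q, max_iter=100, tol=1e-9):
    """Complementary pivoting for LCP(q, M): find z >= 0 with w = M z + q >= 0
    and z^T w = 0.  Returns z, or None on ray termination."""
    n = len(q)
    if np.all(q >= -tol):                     # trivial solution: z = 0, w = q
        return np.zeros(n)
    # Tableau columns: w_1..w_n, z_1..z_n, artificial z0, right-hand side.
    T = np.hstack([np.eye(n), -M, -np.ones((n, 1)), q.reshape(-1, 1)])
    basis = list(range(n))                    # the w_i start out basic
    r = int(np.argmin(q))                     # most negative q_i leaves first
    entering = 2 * n                          # z0 enters
    for _ in range(max_iter):
        T[r] /= T[r, entering]                # pivot on (r, entering)
        for i in range(n):
            if i != r:
                T[i] -= T[i, entering] * T[r]
        leaving, basis[r] = basis[r], entering
        if leaving == 2 * n:                  # z0 left the basis: solution found
            break
        # Complementary rule: if w_i left, z_i enters, and vice versa.
        entering = leaving + n if leaving < n else leaving - n
        col, rhs = T[:, entering], T[:, -1]
        ratios = np.where(col > tol, rhs / np.where(col > tol, col, 1.0), np.inf)
        if not np.isfinite(ratios).any():
            return None                       # ray termination
        r = int(np.argmin(ratios))            # minimum-ratio test
    z = np.zeros(n)
    for i, b in enumerate(basis):
        if n <= b < 2 * n:
            z[b - n] = T[i, -1]
    return z
```

For example, on the hypothetical instance M = [[2, 1], [1, 2]], q = (-5, -6) this sketch returns z = (4/3, 7/3), for which w = Mz + q = 0.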

Also, a quadratic-programming problem stated as minimize $f(x) = c^T x + \tfrac{1}{2} x^T Q x$ subject to $Ax \geq b$ as well as $x \geq 0$, with $Q$ symmetric,

is the same as solving the LCP with

$q = \begin{bmatrix} c \\ -b \end{bmatrix}, \qquad M = \begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix}$

This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as:

$\begin{cases} v = Qx - A^T \lambda + c \\ s = Ax - b \\ x, \lambda, v, s \geq 0 \\ x^T v + \lambda^T s = 0 \end{cases}$

with v the Lagrange multipliers on the non-negativity constraints, λ the multipliers on the inequality constraints, and s the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables (x, s) with its set of KKT vectors (optimal Lagrange multipliers) (v, λ). In that case,

$z = \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad w = \begin{bmatrix} v \\ s \end{bmatrix}$
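
The block construction above is easy to assemble in code. The sketch below builds M and q from the QP data (Q, c, A, b); the function name is illustrative.

```python
import numpy as np

def qp_to_lcp(Q, c, A, b):
    """Assemble the LCP data (q, M) above from the QP
    min c^T x + 0.5 x^T Q x  s.t.  A x >= b, x >= 0  (Q symmetric)."""
    m = A.shape[0]
    M = np.block([[Q, -A.T],
                  [A, np.zeros((m, m))]])
    q = np.concatenate([c, -b])
    return M, q

# A solution z of LCP(q, M) splits as z = (x, lambda), i.e. x = z[:len(c)].
```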

If the non-negativity constraint on x is relaxed, the dimensionality of the LCP problem can be reduced to the number of inequalities, as long as Q is non-singular (which is guaranteed if it is positive definite). The multipliers v are no longer present, and the first KKT condition can be rewritten as:

$Qx = A^T \lambda - c$

or:

$x = Q^{-1}(A^T \lambda - c)$

Pre-multiplying both sides by A and subtracting b, we obtain:

$Ax - b = AQ^{-1}(A^T \lambda - c) - b$

The left side, due to the second KKT condition, is s. Substituting and reordering:

$s = (AQ^{-1}A^T)\lambda + (-AQ^{-1}c - b)$

Calling now

$M := AQ^{-1}A^T$
$q := -AQ^{-1}c - b$

we have an LCP, due to the relation of complementarity between the slack variables s and their Lagrange multipliers λ. Once we solve it, we may obtain the value of x from λ through the first KKT condition.
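
A sketch of this reduced construction, assuming Q is non-singular so that the linear solves are well defined; the function names are illustrative.

```python
import numpy as np

def reduced_lcp(Q, c, A, b):
    """LCP in the multipliers lambda only, for the QP without the x >= 0 bound:
    s = M lambda + q with M = A Q^{-1} A^T and q = -A Q^{-1} c - b."""
    Qinv_At = np.linalg.solve(Q, A.T)      # Q^{-1} A^T
    Qinv_c = np.linalg.solve(Q, c)         # Q^{-1} c
    M = A @ Qinv_At
    q = -A @ Qinv_c - b
    return M, q

def recover_x(Q, c, A, lam):
    """First KKT condition: x = Q^{-1}(A^T lambda - c)."""
    return np.linalg.solve(Q, A.T @ lam - c)
```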

Finally, it is also possible to handle additional equality constraints:

$A_{\mathrm{eq}} x = b_{\mathrm{eq}}$

This introduces a vector of Lagrange multipliers μ with the same dimension as $b_{\mathrm{eq}}$.

It is easy to verify that the M and q for the LCP system $s = M\lambda + q$ are now expressed as:

$M := \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{\mathrm{eq}}^T \\ A_{\mathrm{eq}} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \\ 0 \end{bmatrix}$
$q := \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{\mathrm{eq}}^T \\ A_{\mathrm{eq}} & 0 \end{bmatrix}^{-1} \begin{bmatrix} -c \\ b_{\mathrm{eq}} \end{bmatrix} - b$

From λ we can now recover the values of both x and the Lagrange multiplier of equalities μ:

$\begin{bmatrix} x \\ \mu \end{bmatrix} = \begin{bmatrix} Q & A_{\mathrm{eq}}^T \\ A_{\mathrm{eq}} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \lambda - c \\ b_{\mathrm{eq}} \end{bmatrix}$
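
The equality-constrained construction can be sketched in the same way, assuming the KKT block matrix $K = \begin{bmatrix} Q & A_{\mathrm{eq}}^T \\ A_{\mathrm{eq}} & 0 \end{bmatrix}$ is invertible; again the names are illustrative.

```python
import numpy as np

def reduced_lcp_with_equalities(Q, c, A, b, Aeq, beq):
    """LCP data for the QP with extra equality constraints Aeq x = beq,
    built from the KKT block matrix K = [[Q, Aeq^T], [Aeq, 0]] above."""
    m, meq = A.shape[0], Aeq.shape[0]
    K = np.block([[Q, Aeq.T],
                  [Aeq, np.zeros((meq, meq))]])
    A0 = np.hstack([A, np.zeros((m, meq))])                    # [A 0]
    M = A0 @ np.linalg.solve(K, np.vstack([A.T, np.zeros((meq, m))]))
    q = A0 @ np.linalg.solve(K, np.concatenate([-c, beq])) - b
    return M, q, K

def recover_x_mu(K, c, A, beq, lam):
    """[x; mu] = K^{-1} [A^T lambda - c; beq], as in the last formula above."""
    n = A.shape[1]
    sol = np.linalg.solve(K, np.concatenate([A.T @ lam - c, beq]))
    return sol[:n], sol[n:]
```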

In fact, most QP solvers work on the LCP formulation, including the interior-point method, principal / complementarity pivoting, and active set methods. LCP problems can also be solved by the criss-cross algorithm; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive. Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory.

Further reading

  • LCPSolve — A simple procedure in GAUSS to solve a linear complementarity problem
  • Siconos/Numerics open-source GPL implementation in C of Lemke's algorithm and other methods to solve LCPs and MLCPs
