Non-negative least squares


In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix $\mathbf{A}$ and a (column) vector of response variables $\mathbf{y}$, the goal is to find[1]

$$\arg\min_{\mathbf{x}} \|\mathbf{A}\mathbf{x} - \mathbf{y}\|_2^2 \quad \text{subject to } \mathbf{x} \ge \mathbf{0}.$$

Here $\mathbf{x} \ge \mathbf{0}$ means that each component of the vector $\mathbf{x}$ should be non-negative, and $\|\cdot\|_2$ denotes the Euclidean norm.
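
As a small concrete illustration, the following Python sketch solves an NNLS instance with SciPy's scipy.optimize.nnls routine (the matrix and vector here are made-up values, not from any particular application):

    import numpy as np
    from scipy.optimize import nnls

    # Made-up 4 x 3 design matrix and response vector for illustration.
    A = np.array([[1.0, 0.0, 2.0],
                  [2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 1.0, 1.0]])
    y = np.array([3.0, 2.0, 4.0, 3.0])

    # nnls returns the minimizer x and the residual norm ||Ax - y||_2.
    x, rnorm = nnls(A, y)
    print(x)      # every component is >= 0 by construction
    print(rnorm)  # Euclidean norm of the residual at the minimizer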

Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC[2] and non-negative matrix/tensor factorization.[3][4] The latter can be considered a generalization of NNLS.[1]

Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds $\boldsymbol{\alpha} \le \mathbf{x} \le \boldsymbol{\beta}$.[5]
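
SciPy's scipy.optimize.lsq_linear routine handles box constraints of this form; a minimal sketch with illustrative bounds:

    import numpy as np
    from scipy.optimize import lsq_linear

    A = np.array([[1.0, 0.0, 2.0],
                  [2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0]])
    y = np.array([3.0, 2.0, 4.0])

    # Elementwise bounds alpha <= x <= beta; the values are arbitrary.
    res = lsq_linear(A, y, bounds=(0.0, 1.5))
    print(res.x)  # each component lies in [0, 1.5]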

Quadratic programming version

The NNLS problem is equivalent to a quadratic programming problem

$$\arg\min_{\mathbf{x} \ge \mathbf{0}} \left( \tfrac{1}{2} \mathbf{x}^\mathsf{T} \mathbf{Q} \mathbf{x} + \mathbf{c}^\mathsf{T} \mathbf{x} \right),$$

where $\mathbf{Q} = \mathbf{A}^\mathsf{T} \mathbf{A}$ and $\mathbf{c} = -\mathbf{A}^\mathsf{T} \mathbf{y}$. This problem is convex, as $\mathbf{Q}$ is positive semidefinite and the non-negativity constraints form a convex feasible set.[6]
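
The equivalence is easy to check numerically: form $\mathbf{Q}$ and $\mathbf{c}$ explicitly, minimize the quadratic over $\mathbf{x} \ge \mathbf{0}$ with a generic bound-constrained solver, and compare against a dedicated NNLS routine. A sketch assuming SciPy, with random problem data:

    import numpy as np
    from scipy.optimize import minimize, nnls

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 3))
    y = rng.standard_normal(6)

    # QP data from the NNLS instance: Q = A^T A, c = -A^T y.
    Q = A.T @ A
    c = -A.T @ y

    def objective(x):
        return 0.5 * x @ Q @ x + c @ x

    def gradient(x):
        return Q @ x + c

    # Minimize over x >= 0 with a generic bound-constrained solver.
    res = minimize(objective, np.zeros(3), jac=gradient,
                   method="L-BFGS-B", bounds=[(0.0, None)] * 3)

    x_nnls, _ = nnls(A, y)
    print(res.x)   # agrees with x_nnls up to solver tolerance
    print(x_nnls)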

Algorithms

The first widely used algorithm for solving this problem is an active-set method published by Lawson and Hanson in their 1974 book Solving Least Squares Problems.[7] In pseudocode, this algorithm looks as follows:[7][2]

    Inputs: a real-valued m × n matrix A, a real-valued vector y of length m,
            and a tolerance ε > 0 for the stopping criterion.
    Initialize: P ← ∅, R ← {1, …, n}, x ← 0, w ← Aᵀ(y − Ax).
    Main loop: while R ≠ ∅ and max{ wⱼ : j ∈ R } > ε:
        Let j ∈ R be the index maximizing wⱼ; move j from R to P.
        Let Aᴾ denote the submatrix of columns of A indexed by P.
        Compute sᴾ ← ((Aᴾ)ᵀ Aᴾ)⁻¹ (Aᴾ)ᵀ y and set sᴿ ← 0.
        While min{ sᵢ : i ∈ P } ≤ 0:
            α ← min{ xᵢ / (xᵢ − sᵢ) : i ∈ P, sᵢ ≤ 0 }.
            Update x ← x + α(s − x).
            Move to R every index i ∈ P with xᵢ = 0.
            Recompute sᴾ ← ((Aᴾ)ᵀ Aᴾ)⁻¹ (Aᴾ)ᵀ y and set sᴿ ← 0.
        Set x ← s and w ← Aᵀ(y − Ax).
    Output: x.

This algorithm takes a finite number of steps to reach a solution and smoothly improves its candidate solution as it goes (so it can find good approximate solutions when cut off at a reasonable number of iterations), but is very slow in practice, owing largely to the computation of the pseudoinverse $((\mathbf{A}^P)^\mathsf{T} \mathbf{A}^P)^{-1} (\mathbf{A}^P)^\mathsf{T}$.[2] Variants of this algorithm are available in MATLAB as the routine lsqnonneg[8][1] and in SciPy as scipy.optimize.nnls.[9]
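
A compact NumPy transcription of the pseudocode above may make the structure clearer (an illustrative sketch only; a library routine such as scipy.optimize.nnls should be preferred in practice):

    import numpy as np

    def nnls_active_set(A, y, eps=1e-10, max_iter=1000):
        """Active-set NNLS in the style of Lawson and Hanson."""
        m, n = A.shape
        P = []                  # passive set: indices free to be positive
        R = list(range(n))      # active set: indices pinned at zero
        x = np.zeros(n)
        w = A.T @ (y - A @ x)   # negative gradient of 0.5*||y - Ax||^2
        for _ in range(max_iter):
            if not R or max(w[i] for i in R) <= eps:
                break           # KKT conditions hold: x is optimal
            j = max(R, key=lambda i: w[i])
            P.append(j)
            R.remove(j)
            # Unconstrained least squares restricted to the passive columns.
            s = np.zeros(n)
            s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
            # Back off toward x while any passive component is non-positive.
            while P and s[P].min() <= 0:
                alpha = min(x[i] / (x[i] - s[i]) for i in P if s[i] <= 0)
                x = x + alpha * (s - x)
                for i in [i for i in P if x[i] <= eps]:
                    P.remove(i)   # indices that hit zero rejoin the active set
                    R.append(i)
                    x[i] = 0.0
                s = np.zeros(n)
                s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
            x = s.copy()
            w = A.T @ (y - A @ x)
        return x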

Many improved algorithms have been suggested since 1974. Fast NNLS (FNNLS) is an optimized version of the Lawson–Hanson algorithm.[2] Other algorithms include variants of Landweber's gradient descent method,[10] coordinate-wise optimization based on the quadratic programming problem above,[6] and an active set method called TNT-NN.[11]
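
As an illustration of the gradient-based family, a basic projected-gradient iteration for NNLS takes only a few lines. This sketch uses the classical step size 1/L, where L is the largest eigenvalue of $\mathbf{A}^\mathsf{T}\mathbf{A}$, and is a generic illustration rather than any specific published variant:

    import numpy as np

    def nnls_projected_gradient(A, y, n_iter=500):
        """Projected gradient descent for min ||Ax - y||^2 subject to x >= 0."""
        # 1/L step size: L is the Lipschitz constant of the gradient,
        # i.e. the squared spectral norm of A.
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2
            x = np.maximum(x - grad / L, 0.0)  # step, then project onto x >= 0
        return x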

References

Template:Reflist