Descent direction

In optimization, a descent direction is a vector $\mathbf{p} \in \mathbb{R}^n$ that points towards a local minimum $\mathbf{x}^*$ of an objective function $f : \mathbb{R}^n \to \mathbb{R}$.

Computing $\mathbf{x}^*$ by an iterative method, such as line search, defines a descent direction $\mathbf{p}_k \in \mathbb{R}^n$ at the $k$th iterate to be any $\mathbf{p}_k$ such that $\langle \mathbf{p}_k, \nabla f(\mathbf{x}_k) \rangle < 0$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. The motivation for such an approach is that, by Taylor's theorem, sufficiently small steps along $\mathbf{p}_k$ are guaranteed to reduce $f$.
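The descent condition is straightforward to check numerically. The following is a minimal sketch, assuming a toy quadratic objective $f(x, y) = x^2 + 2y^2$ chosen for illustration (the names `f`, `grad_f`, and `is_descent_direction` are not from the article):

```python
import numpy as np

def f(x):
    return x[0]**2 + 2.0 * x[1]**2               # toy objective f(x, y) = x^2 + 2y^2

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])    # its gradient

def is_descent_direction(p, x):
    # Descent condition: <p, grad f(x)> < 0
    return np.dot(p, grad_f(x)) < 0.0

x_k = np.array([1.0, 1.0])                       # current iterate; grad f(x_k) = (2, 4)
print(is_descent_direction(np.array([-1.0, 0.0]), x_k))  # True
print(is_descent_direction(np.array([1.0, 1.0]), x_k))   # False: points uphill

# Taylor's theorem in action: a small step along a descent direction lowers f.
alpha = 0.1
p_k = -grad_f(x_k)
print(f(x_k), f(x_k + alpha * p_k))              # 3.0, then 1.36
```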

Using this definition, the negative of a non-zero gradient is always a descent direction, as $\langle -\nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle = -\langle \nabla f(\mathbf{x}_k), \nabla f(\mathbf{x}_k) \rangle < 0$.

Numerous methods exist to compute descent directions, each with differing merits; examples include gradient descent and the conjugate gradient method.

More generally, if $P$ is a positive definite matrix, then $\mathbf{p}_k = -P \nabla f(\mathbf{x}_k)$ is a descent direction at $\mathbf{x}_k$.[1] This generality is used in preconditioned gradient descent methods.
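A minimal sketch of why this holds, reusing the assumed toy objective above; the matrix $P$ below is an arbitrary symmetric positive definite example. Since $P$ is positive definite, $\langle -P\mathbf{g}, \mathbf{g} \rangle = -\mathbf{g}^{\mathsf{T}} P \mathbf{g} < 0$ for any non-zero gradient $\mathbf{g}$:

```python
import numpy as np

def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])    # gradient of the toy objective x^2 + 2y^2

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])                       # symmetric positive definite (assumed example)

x_k = np.array([1.0, 1.0])
g = grad_f(x_k)
p_k = -P @ g                                     # preconditioned direction

# <p_k, g> = -g^T P g < 0 for non-zero g, so p_k is a descent direction.
print(np.dot(p_k, g))                            # -32.0
```

With $P$ equal to the identity matrix, this reduces to the steepest-descent direction $-\nabla f(\mathbf{x}_k)$.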

References
