Regularization perspectives on support vector machines

Within mathematical analysis, regularization perspectives on support-vector machines provide a way of interpreting support-vector machines (SVMs) in the context of other regularization-based machine-learning algorithms. SVM algorithms categorize binary data, with the goal of fitting the training-set data in a way that minimizes the average of the hinge loss and the L2 norm of the learned weights. This strategy avoids overfitting via Tikhonov regularization in the L2-norm sense and corresponds to minimizing the bias and variance of the estimator of the weights. Estimators with lower mean squared error predict better, or generalize better, when given unseen data.

Specifically, Tikhonov regularization algorithms produce a decision boundary that minimizes the average training-set error and constrain the decision boundary not to be excessively complicated or to overfit the training data, via an L2-norm penalty on the weights. The training- and test-set errors can be measured without bias and in a fair way using accuracy, precision, AUC-ROC, precision-recall, and other metrics.

Regularization perspectives on support-vector machines interpret SVM as a special case of Tikhonov regularization, specifically Tikhonov regularization with the hinge loss as the loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goal: to generalize without overfitting. SVM was first proposed in 1995 by Corinna Cortes and Vladimir Vapnik, and framed geometrically as a method for finding hyperplanes that can separate multidimensional data into two categories.[1] This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to other machine-learning techniques for avoiding overfitting, like regularization, early stopping, sparsity, and Bayesian inference. However, once it was discovered that SVM is also a special case of Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms.[2][3][4] This has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and has provided theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss.[5]
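The regularization view can be made concrete with a short sketch. The following is a minimal illustration under assumed choices, not the formulation of the cited papers: a linear classifier is fit by subgradient descent on the average hinge loss plus an L2 penalty on the weights (the names fit_linear_svm, lam, and lr are illustrative).

```python
# Minimal sketch (illustrative, not the cited papers' formulation) of a linear
# SVM seen as regularized hinge-loss minimization:
#   minimize over w:  (1/n) * sum_i hinge(y_i, w.x_i) + lam * ||w||^2
import numpy as np

def fit_linear_svm(X, y, lam=0.1, lr=0.01, epochs=200):
    """Subgradient descent on the average hinge loss plus an L2 penalty."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)                      # y_i * f(x_i)
        active = margins < 1                       # points contributing hinge loss
        # Subgradient of the average hinge loss, plus gradient of lam * ||w||^2.
        grad = -(y[active, None] * X[active]).sum(axis=0) / n + 2 * lam * w
        w -= lr * grad
    return w

# Toy binary data with labels in {-1, +1} (illustrative).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
w = fit_linear_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Shrinking lam lets the weights fit the training data more closely, while increasing it trades training error for a simpler, lower-variance classifier.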

Theoretical background

In the statistical learning theory framework, an algorithm is a strategy for choosing a function $f: \mathbf{X} \to \mathbf{Y}$ given a training set $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ of inputs $x_i$ and their labels $y_i$ (the labels are usually $\pm 1$). Regularization strategies avoid overfitting by choosing a function that fits the data, but is not too complex. Specifically:

$$f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ \frac{1}{n} \sum_{i=1}^n V(y_i, f(x_i)) + \lambda \|f\|_{\mathcal{H}}^2 \right\},$$

where $\mathcal{H}$ is a hypothesis space[6] of functions, $V: \mathbf{Y} \times \mathbf{Y} \to \mathbb{R}$ is the loss function, $\|\cdot\|_{\mathcal{H}}$ is a norm on the hypothesis space of functions, and $\lambda$ is the regularization parameter.[7]
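As a concrete reading of this objective, the sketch below evaluates the regularized empirical risk for a candidate function, with the loss $V$ passed in so that different losses can be compared. The helper names (regularized_risk, hinge, square) are illustrative, and the squared norm is supplied by the caller because it depends on the hypothesis space.

```python
# Minimal sketch: evaluate (1/n) * sum_i V(y_i, f(x_i)) + lam * ||f||^2
# for a candidate function f (illustrative helper names).
import numpy as np

def hinge(y, fx):
    return np.maximum(0.0, 1.0 - y * fx)

def square(y, fx):
    return (y - fx) ** 2

def regularized_risk(f, norm_sq, X, y, V, lam):
    """Average loss on the training set plus lam times the squared norm of f."""
    fx = np.array([f(x) for x in X])
    return V(y, fx).mean() + lam * norm_sq

# Example: a linear candidate f(x) = w . x, whose squared norm is ||w||^2.
w = np.array([0.5, -1.0])
X = np.array([[1.0, 2.0], [-1.0, 0.5], [2.0, -1.0]])
y = np.array([1.0, -1.0, 1.0])
print(regularized_risk(lambda x: w @ x, w @ w, X, y, hinge, lam=0.1))
```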

When $\mathcal{H}$ is a reproducing kernel Hilbert space, there exists a kernel function $K: \mathbf{X} \times \mathbf{X} \to \mathbb{R}$ that can be written as an $n \times n$ symmetric positive-definite matrix $\mathbf{K}$. By the representer theorem,[8]

$$f(x_i) = \sum_{j=1}^n c_j \mathbf{K}_{ij}, \quad \text{and} \quad \|f\|_{\mathcal{H}}^2 = \langle f, f \rangle_{\mathcal{H}} = \sum_{i=1}^n \sum_{j=1}^n c_i c_j K(x_i, x_j) = c^T \mathbf{K} c.$$
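As an illustration of these identities, the sketch below builds a Gaussian kernel matrix (an assumed choice of kernel, not prescribed by the text) and computes the values $f(x_i)$ and the squared norm $c^T \mathbf{K} c$ from an illustrative coefficient vector $c$.

```python
# Minimal sketch of the representer-theorem quantities: for f = sum_j c_j K(., x_j),
# the training-set values of f are K @ c and the squared RKHS norm is c^T K c.
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """n x n symmetric positive-definite matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))      # illustrative inputs
c = rng.normal(size=5)           # coefficients of f in the kernel expansion

K = gaussian_kernel_matrix(X)
f_values = K @ c                 # f(x_i) = sum_j c_j K_ij
norm_sq = c @ K @ c              # ||f||^2 = c^T K c

print(f_values)
print(norm_sq)
```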

Special properties of the hinge loss

[Figure: Hinge and misclassification loss functions]

The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 if $f(x_i) = y_i$ and 1 if $f(x_i) \neq y_i$, i.e. the Heaviside step function on $-y_i f(x_i)$. However, this loss function is not convex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss. The hinge loss, $V(y_i, f(x_i)) = (1 - y_i f(x_i))_+$, where $(s)_+ = \max(s, 0)$, provides such a convex relaxation. In fact, the hinge loss is the tightest convex upper bound to the 0–1 misclassification loss function,[4] and with infinite data returns the Bayes-optimal solution:[5][9]

$$f_b(x) = \begin{cases} 1, & p(1 \mid x) > p(-1 \mid x), \\ -1, & p(1 \mid x) < p(-1 \mid x). \end{cases}$$
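A small numerical sketch makes the upper-bound property concrete: over a grid of margins $y f(x)$ (the grid values are illustrative), the hinge loss is never smaller than the 0–1 loss, and both vanish once the margin is at least 1.

```python
# Minimal sketch comparing the 0-1 (misclassification) loss and the hinge loss
# as functions of the margin y * f(x).
import numpy as np

margins = np.linspace(-2.0, 2.0, 9)          # illustrative values of y * f(x)
zero_one = (margins <= 0).astype(float)      # 1 on a mistake, 0 otherwise
hinge = np.maximum(0.0, 1.0 - margins)       # (1 - y f(x))_+

for m, z, h in zip(margins, zero_one, hinge):
    print(f"margin {m:+.1f}: 0-1 loss {z:.0f}, hinge loss {h:.1f}")

assert np.all(hinge >= zero_one)             # hinge upper-bounds the 0-1 loss
```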

Derivation

The Tikhonov regularization problem can be shown to be equivalent to traditional formulations of SVM by expressing it in terms of the hinge loss.[10] With the hinge loss

$$V(y_i, f(x_i)) = (1 - y_i f(x_i))_+,$$

where $(s)_+ = \max(s, 0)$, the regularization problem becomes

$$f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ \frac{1}{n} \sum_{i=1}^n (1 - y_i f(x_i))_+ + \lambda \|f\|_{\mathcal{H}}^2 \right\}.$$

Multiplying by $1/(2\lambda)$ yields

$$f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ C \sum_{i=1}^n (1 - y_i f(x_i))_+ + \frac{1}{2} \|f\|_{\mathcal{H}}^2 \right\}$$

with $C = 1/(2\lambda n)$, which is equivalent to the standard SVM minimization problem.
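Spelled out, the scaling step is the following; the minimizer is unchanged because the objective is only multiplied by the positive constant $1/(2\lambda)$:

$$\frac{1}{2\lambda}\left[\frac{1}{n}\sum_{i=1}^n (1 - y_i f(x_i))_+ + \lambda \|f\|_{\mathcal{H}}^2\right] = \frac{1}{2\lambda n}\sum_{i=1}^n (1 - y_i f(x_i))_+ + \frac{1}{2}\|f\|_{\mathcal{H}}^2 = C \sum_{i=1}^n (1 - y_i f(x_i))_+ + \frac{1}{2}\|f\|_{\mathcal{H}}^2.$$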

Notes and references


  1. Cortes, Corinna; Vapnik, Vladimir (1995). "Support-vector networks". Machine Learning. 20 (3): 273–297.
  2. Template:Cite web
  3. Template:Cite book
  4. Template:Cite journal
  5. Template:Cite journal
  6. A hypothesis space is the set of functions used to model the data in a machine-learning problem. Each function corresponds to a hypothesis about the structure of the data. Typically the functions in a hypothesis space form a Hilbert space of functions with norm formed from the loss function.
  7. For insight on choosing the parameter, see, e.g., Template:Cite journal
  8. Template:Cite conference
  9. Template:Cite journal
  10. For a detailed derivation, see Template:Cite book