Relevance vector machine

Revision as of 07:51, 22 November 2024 by Faultier1

In mathematics, a relevance vector machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification.[1] A fast version based on a greedy optimisation procedure was subsequently developed.[2][3] The RVM has an identical functional form to the support vector machine, but provides probabilistic classification.

It is equivalent to a Gaussian process model with covariance function:

$$k(\mathbf{x},\mathbf{x}') = \sum_{j=1}^{N} \frac{1}{\alpha_j} \varphi(\mathbf{x},\mathbf{x}_j)\,\varphi(\mathbf{x}',\mathbf{x}_j)$$

where $\varphi$ is the kernel function (usually Gaussian), the $\alpha_j$ are the precisions (inverse variances) of the prior on the weight vector $w \sim N(0, \alpha^{-1}I)$, and $\mathbf{x}_1, \ldots, \mathbf{x}_N$ are the input vectors of the training set.[4]
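As a concrete illustration of this equivalence, the covariance above can be evaluated directly from the basis functions. This is a minimal sketch assuming Gaussian basis functions centred on the training inputs; the function names and the kernel width `gamma` are illustrative, not part of any standard library.

```python
import numpy as np

def gaussian_basis(A, B, gamma=1.0):
    # phi(x, x_j) = exp(-gamma * ||x - x_j||^2), a common choice of RVM basis
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rvm_gp_covariance(X1, X2, X_train, alpha):
    # k(x, x') = sum_j (1/alpha_j) * phi(x, x_j) * phi(x', x_j)
    P1 = gaussian_basis(X1, X_train)   # shape (N1, N)
    P2 = gaussian_basis(X2, X_train)   # shape (N2, N)
    return P1 @ np.diag(1.0 / alpha) @ P2.T

# toy training inputs and prior precisions (illustrative values)
X_train = np.linspace(0.0, 1.0, 5)[:, None]
alpha = np.ones(5)
K = rvm_gp_covariance(X_train, X_train, X_train, alpha)
```

The resulting matrix is symmetric and positive semidefinite, as a valid GP covariance must be.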

Compared to support vector machines (SVMs), the Bayesian formulation of the RVM avoids the SVM's free parameters, which usually require cross-validation-based post-optimization. However, RVMs use an expectation-maximization (EM)-like learning method and are therefore at risk of local minima. This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem).
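The EM-like learning method alternates between computing the posterior over the weights and re-estimating the hyperparameters. The sketch below follows the standard type-II maximum-likelihood updates for RVM regression; the basis width, iteration count, clipping bounds, and toy data are illustrative assumptions, not prescribed values.

```python
import numpy as np

def rbf_design(X, centres, gamma=1.0):
    # Gaussian basis functions centred on the training inputs
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rvm_regression(X, t, n_iter=100):
    Phi = rbf_design(X, X)
    N, M = Phi.shape
    alpha = np.ones(M)          # precisions of the weight prior
    beta = 1.0                  # noise precision
    for _ in range(n_iter):
        # E-like step: posterior over weights given current hyperparameters
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # M-like step: g_j = 1 - alpha_j * Sigma_jj measures how
        # well-determined each weight is by the data
        g = 1.0 - alpha * np.diag(Sigma)
        # clipping is a numerical safeguard, not part of the algorithm proper
        alpha = np.clip(g / (mu ** 2 + 1e-12), 1e-12, 1e12)
        beta = (N - g.sum()) / np.sum((t - Phi @ mu) ** 2)
    return mu, alpha

# toy data: noisy sinc function
rng = np.random.default_rng(0)
X = np.linspace(-5, 5, 40)[:, None]
t = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(40)
mu, alpha = rvm_regression(X, t)
# weights whose precision diverges are effectively pruned; the surviving
# basis centres are the "relevance vectors"
n_relevant = int((alpha < 1e6).sum())
print("relevance vectors kept:", n_relevant, "of", len(alpha))
```

Because the updates only maximize the marginal likelihood locally, different initializations of `alpha` and `beta` can converge to different sparse solutions, which is the local-minima risk noted above.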

The relevance vector machine was patented in the United States by Microsoft (patent expired September 4, 2019).[5]

References

Template:Reflist
