Integral probability metric

In probability theory, integral probability metrics are types of distance functions between probability distributions, defined by how well a class of functions can distinguish the two distributions. Many important statistical distances are integral probability metrics, including the Wasserstein-1 distance and the total variation distance. In addition to theoretical importance, integral probability metrics are widely used in areas of statistics and machine learning.

The name "integral probability metric" was given by German statistician Alfred Müller;[1] the distances had also previously been called "metrics with a Template:Math-structure."[2]

Definition

Integral probability metrics (IPMs) are distances on the space of distributions over a set $\mathcal{X}$, defined by a class $\mathcal{F}$ of real-valued functions on $\mathcal{X}$ as
\[
D_{\mathcal F}(P, Q) \;=\; \sup_{f \in \mathcal F} \bigl| \mathbb{E}_{X \sim P} f(X) - \mathbb{E}_{Y \sim Q} f(Y) \bigr| \;=\; \sup_{f \in \mathcal F} \bigl| P f - Q f \bigr| ;
\]
here the notation $Pf$ refers to the expectation of $f$ under the distribution $P$. The absolute value in the definition is unnecessary, and often omitted, for the usual case where for every $f \in \mathcal F$ its negation $-f$ is also in $\mathcal F$.
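As a concrete illustration of the definition, the minimal Python sketch below computes $D_{\mathcal F}$ for a small, hand-picked finite class of critic functions, with sample averages standing in for exact expectations; the distributions, sample sizes, and critic functions are arbitrary choices made only for this example, not drawn from the cited sources.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples standing in for P and Q (illustrative choices).
x = rng.normal(loc=0.0, scale=1.0, size=5000)  # X ~ P
y = rng.normal(loc=0.5, scale=1.0, size=5000)  # Y ~ Q

# A tiny, hand-picked critic class F.  Real IPMs use much richer classes
# (e.g. all 1-Lipschitz functions for Wasserstein-1); the definition is the
# same: take the largest gap in expectations over F.
critics = [
    np.sin,
    np.tanh,
    lambda t: np.clip(t, -1.0, 1.0),
]

def ipm_estimate(x, y, critics):
    """sup_{f in F} |E_P f - E_Q f|, with expectations replaced by sample means."""
    return max(abs(f(x).mean() - f(y).mean()) for f in critics)

print(ipm_estimate(x, y, critics))
```

Richer classes $\mathcal F$ generally require an explicit optimization (or a closed form, as for the maximum mean discrepancy) rather than enumeration over a finite list.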

The functions $f \in \mathcal F$ being optimized over are sometimes called "critic" functions;[3] if a particular $f^{*}$ achieves the supremum, it is often termed a "witness function"[4] (it "witnesses" the difference in the distributions). These functions try to have large values for samples from $P$ and small (likely negative) values for samples from $Q$; this can be thought of as a weaker version of classifiers, and indeed IPMs can be interpreted as the optimal risk of a particular classifier.
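As a concrete instance of this classifier interpretation (a standard calculation, stated here under the convention $\mathrm{TV}(P,Q) = \sup_A |P(A) - Q(A)|$ and assuming equal class priors, rather than taken from the cited sources): if a classifier $h$ must guess whether a point was drawn from $P$ or from $Q$, its best achievable accuracy is
\[
\max_{h}\ \Bigl( \tfrac12 \Pr_{X \sim P}\bigl[h(X) = P\bigr] + \tfrac12 \Pr_{Y \sim Q}\bigl[h(Y) = Q\bigr] \Bigr) \;=\; \tfrac12\bigl(1 + \mathrm{TV}(P,Q)\bigr),
\]
so a larger value of this IPM corresponds exactly to a smaller optimal classification risk.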

The choice of $\mathcal F$ determines the particular distance; more than one $\mathcal F$ can generate the same distance.[1]

For any choice of $\mathcal F$, $D_{\mathcal F}$ satisfies all the definitions of a metric except that we may have $D_{\mathcal F}(P, Q) = 0$ for some $P \ne Q$; this is variously termed a "pseudometric" or a "semimetric" depending on the community. For instance, using the class $\mathcal F = \{\, x \mapsto 0 \,\}$, which only contains the zero function, $D_{\mathcal F}(P, Q)$ is identically zero. $D_{\mathcal F}$ is a metric if and only if $\mathcal F$ separates points on the space of probability distributions, i.e. for any $P \ne Q$ there is some $f \in \mathcal F$ such that $Pf \ne Qf$;[1] most, but not all, common particular cases satisfy this property.

Examples

All of these examples are metrics except when noted otherwise.

Relationship to f-divergences

The f-divergences are probably the best-known way to measure dissimilarity of probability distributions. It has been shown that the only functions which are both IPMs and $f$-divergences are of the form $c \cdot \mathrm{TV}(P, Q)$, where $c \in [0, \infty]$ and $\mathrm{TV}$ is the total variation distance between distributions.
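As an illustration of this overlap (using the convention $\mathrm{TV}(P,Q) = \sup_A |P(A) - Q(A)|$ and assuming the density $\mathrm{d}P/\mathrm{d}Q$ exists; normalization conventions vary across the literature), the total variation distance arises in both ways:
\[
\mathrm{TV}(P,Q) \;=\; \sup_{f :\, \mathcal X \to [0,1]} \bigl| Pf - Qf \bigr| \;=\; \int \tfrac12 \left| \frac{\mathrm{d}P}{\mathrm{d}Q} - 1 \right| \mathrm{d}Q ,
\]
that is, it is simultaneously the IPM generated by critic functions valued in $[0,1]$ and the $f$-divergence generated by $f(t) = \tfrac12 |t - 1|$.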

One major difference between $f$-divergences and most IPMs is that when $P$ and $Q$ have disjoint support, all $f$-divergences take on a constant value;[16] by contrast, IPMs where the functions in $\mathcal F$ are "smooth" can give "partial credit." For instance, consider the sequence $\delta_{1/n}$ of Dirac measures at $1/n$; this sequence converges in distribution to $\delta_0$, and many IPMs satisfy $D_{\mathcal F}(\delta_{1/n}, \delta_0) \to 0$, but no nonzero $f$-divergence can satisfy this. That is, many IPMs are continuous in weaker topologies than $f$-divergences. This property is sometimes of substantial importance,[17] although other options also exist, such as considering $f$-divergences between distributions convolved with continuous noise.[17][18]
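The contrast can be seen numerically with a short sketch (assuming SciPy is available; the values of $n$ are arbitrary): the one-dimensional Wasserstein-1 distance between $\delta_{1/n}$ and $\delta_0$ shrinks to zero, while the total variation distance stays at its constant disjoint-support value.

```python
from scipy.stats import wasserstein_distance

for n in (1, 10, 100, 1000):
    # delta_{1/n} and delta_0, each represented by a single sample point.
    w1 = wasserstein_distance([1.0 / n], [0.0])
    tv = 1.0  # total variation between two distinct point masses is always 1
    print(f"n={n:5d}  Wasserstein-1 = {w1:.4f}  TV = {tv}")
```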

Estimation from samples

Because IPM values between discrete distributions are often sensible, it is often reasonable to estimate $D_{\mathcal F}(P, Q)$ using a simple "plug-in" estimator: $D_{\mathcal F}(\hat P, \hat Q)$, where $\hat P$ and $\hat Q$ are empirical measures of sample sets. These empirical distances can be computed exactly for some classes $\mathcal F$;[19] estimation quality varies depending on the distance, but can be minimax-optimal in certain settings.[13][20][21]
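For example, in one dimension the Wasserstein-1 IPM between two empirical measures can be computed exactly from the sorted samples; the sketch below (assuming SciPy, with arbitrary choices of distributions and sample sizes) is exactly this kind of plug-in estimator.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=2000)  # sample from P
y = rng.normal(0.3, 1.2, size=2000)  # sample from Q

# Plug-in estimator: the exact Wasserstein-1 distance between the empirical
# measures \hat P and \hat Q, available in closed form in 1-D via the
# empirical CDFs of the two samples.
print(wasserstein_distance(x, y))
```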

When exact maximization is not available or too expensive, another commonly used scheme is to divide the samples into "training" sets (with empirical measures $\hat P_{\text{train}}$ and $\hat Q_{\text{train}}$) and "test" sets ($\hat P_{\text{test}}$ and $\hat Q_{\text{test}}$), find $\hat f$ approximately maximizing $|\hat P_{\text{train}} f - \hat Q_{\text{train}} f|$, then use $|\hat P_{\text{test}} \hat f - \hat Q_{\text{test}} \hat f|$ as an estimate.[22][11][23][24] This estimator can possibly be consistent, but it has a negative bias. In fact, no unbiased estimator can exist for any IPM, although there is for instance an unbiased estimator of the squared maximum mean discrepancy.
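The following is a minimal sketch of this data-splitting scheme, using the toy class of linear critics $f_w(x) = \langle w, x \rangle$ with $\lVert w \rVert \le 1$, chosen here only because its training-set maximization has a closed form; the critic class, distributions, and split sizes are illustrative assumptions rather than the estimators of the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=(4000, 5))  # samples from P
y = rng.normal(0.2, 1.0, size=(4000, 5))  # samples from Q

# Split each sample set into "training" and "test" halves.
x_tr, x_te = x[:2000], x[2000:]
y_tr, y_te = y[:2000], y[2000:]

# Critic class: f_w(x) = <w, x> with ||w|| <= 1.  On the training split,
# |P_train f_w - Q_train f_w| = |<w, mean(x_tr) - mean(y_tr)>| is maximized
# by w pointing along the difference of the training means.
diff = x_tr.mean(axis=0) - y_tr.mean(axis=0)
w_hat = diff / np.linalg.norm(diff)

# Evaluate the learned critic on the held-out test split.
estimate = abs((x_te @ w_hat).mean() - (y_te @ w_hat).mean())
print(estimate)
```

Because $\hat f$ is selected on the training split only, evaluating it on the held-out split avoids the optimism of optimizing and evaluating on the same data, at the cost of the negative bias noted above (the selected critic is generally suboptimal for the true distributions).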

References

Template:Reflist