Probabilistic metric space


In mathematics, probabilistic metric spaces are a generalization of metric spaces in which the distance no longer takes values in the non-negative real numbers [0, ∞), but in distribution functions.[1]

Let D+ be the set of all probability distribution functions F such that F(0) = 0; that is, F is a nondecreasing, left-continuous mapping from the real line into [0, 1] with sup F(x) = 1.

Then given a non-empty set S and a function F: S × S → D+, where we denote F(p, q) by Fp,q for every (p, q) ∈ S × S, the ordered pair (S, F) is said to be a probabilistic metric space if:

1. For all u and v in S, u = v if and only if Fu,v(x) = 1 for all x > 0.
2. For all u and v in S, Fu,v = Fv,u.
3. For all u, v and w in S, Fu,v(x) = 1 and Fv,w(y) = 1 imply Fu,w(x + y) = 1 for all x, y > 0.

History

Probabilistic metric spaces were first introduced by Menger, who termed them statistical metric spaces.[3] Shortly afterwards, Wald criticized the generalized triangle inequality and proposed an alternative one.[4] However, both authors came to the conclusion that in some respects Wald's inequality was too stringent a requirement to impose on all probabilistic metric spaces; this view is partly reflected in the work of Schweizer and Sklar.[5] Later, probabilistic metric spaces were found to be well suited for use with fuzzy sets[6] and were further developed into fuzzy metric spaces.[7]

Probability metric of random variables

A probability metric D between two random variables X and Y may be defined, for example, as
\[ D(X, Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |x - y| \, F(x, y) \, dx \, dy \]
where F(x, y) denotes the joint probability density function of the random variables X and Y. If X and Y are independent, the equation above reduces to
\[ D(X, Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |x - y| \, f(x) \, g(y) \, dx \, dy \]
where f(x) and g(y) are the probability density functions of X and Y respectively.
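The double integral above can be checked numerically. The following is a minimal sketch for the independent case, approximating the integral with a midpoint Riemann sum; `probability_metric` and `normal_pdf` are hypothetical helper names, not library functions, and the truncation of the integration domain to a finite square is an assumption that holds only when the densities have negligible mass outside it.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def probability_metric(f, g, lo=-10.0, hi=10.0, n=400):
    """Approximate D(X, Y) = integral of |x - y| f(x) g(y) dx dy for
    independent X, Y by a midpoint Riemann sum on the square [lo, hi]^2."""
    h = (hi - lo) / n
    xs = [lo + (i + 0.5) * h for i in range(n)]
    return sum(abs(x - y) * f(x) * g(y) for x in xs for y in xs) * h * h

# Two normals with means 0 and 2 and the same standard deviation 1.
f = lambda x: normal_pdf(x, 0.0, 1.0)
g = lambda y: normal_pdf(y, 2.0, 1.0)
print(probability_metric(f, g))  # slightly above |mu_x - mu_y| = 2
```

Note that the result exceeds the distance between the means, illustrating that D(X, Y) > 0 even when the means coincide.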

One may easily show that such probability metrics do not satisfy the first metric axiom, or rather satisfy it if, and only if, both arguments X and Y are certain events described by Dirac delta probability density functions. In this case,
\[ D(X, Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |x - y| \, \delta(x - \mu_x) \, \delta(y - \mu_y) \, dx \, dy = |\mu_x - \mu_y|, \]
and the probability metric simply reduces to the metric between the expected values \(\mu_x\), \(\mu_y\) of the variables X and Y.

For all other random variables X, Y the probability metric does not satisfy the identity of indiscernibles condition required of the metric of a metric space; that is, D(X, X) > 0.

Figure: Probability metric between two random variables X and Y, both normally distributed with the same standard deviation \(\sigma \in \{0, 0.2, 0.4, 0.6, 0.8, 1\}\) (beginning with the bottom curve). \(m_{xy} = |\mu_x - \mu_y|\) denotes the distance between the means of X and Y.

Example

For example, if both random variables X and Y have normal distributions with the same standard deviation \(\sigma\), integrating D(X, Y) yields
\[ D_{NN}(X, Y) = \mu_{xy} + \frac{2\sigma}{\sqrt{\pi}} \exp\!\left(-\frac{\mu_{xy}^2}{4\sigma^2}\right) - \mu_{xy} \operatorname{erfc}\!\left(\frac{\mu_{xy}}{2\sigma}\right), \]
where \(\mu_{xy} = |\mu_x - \mu_y|\) and erfc(x) is the complementary error function.

In this case,
\[ \lim_{\mu_{xy} \to 0} D_{NN}(X, Y) = D_{NN}(X, X) = \frac{2\sigma}{\sqrt{\pi}}. \]
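The closed form and its limit are easy to evaluate directly, since Python's standard library provides the complementary error function. The sketch below implements the formula above; `d_nn` is a hypothetical helper name chosen for this illustration.

```python
import math

def d_nn(mu_xy, sigma):
    """Closed-form probability metric D_NN between two normal random
    variables with mean distance mu_xy and equal standard deviation sigma."""
    return (mu_xy
            + (2 * sigma / math.sqrt(math.pi)) * math.exp(-mu_xy ** 2 / (4 * sigma ** 2))
            - mu_xy * math.erfc(mu_xy / (2 * sigma)))

# As mu_xy -> 0 the metric tends to 2*sigma/sqrt(pi), so D(X, X) > 0.
sigma = 0.5
print(d_nn(1e-9, sigma), 2 * sigma / math.sqrt(math.pi))
```

At \(\mu_{xy} = 0\) the erfc term vanishes and the exponential equals 1, recovering \(2\sigma/\sqrt{\pi}\) exactly.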

Probability metric of random vectors

The probability metric of random variables may be extended to a metric D(X, Y) of random vectors X, Y by substituting \(|x - y|\) with any metric operator d(x, y):
\[ D(\mathbf{X}, \mathbf{Y}) = \int_{\Omega} \int_{\Omega} d(\mathbf{x}, \mathbf{y}) \, F(\mathbf{x}, \mathbf{y}) \, d\Omega_x \, d\Omega_y, \]
where \(F(\mathbf{x}, \mathbf{y})\) is the joint probability density function of the random vectors X and Y. For example, substituting d(x, y) with the Euclidean metric, and provided the vectors X and Y are mutually independent, yields
\[ D(\mathbf{X}, \mathbf{Y}) = \int_{\Omega} \int_{\Omega} \sqrt{\sum_i |x_i - y_i|^2} \; F(\mathbf{x}) \, G(\mathbf{y}) \, d\Omega_x \, d\Omega_y. \]
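Since D(X, Y) is the expectation of d(X, Y), it can be estimated by Monte Carlo sampling instead of evaluating the high-dimensional integral. The sketch below does this for independent Gaussian vectors with the Euclidean metric; `mc_vector_metric`, `euclid`, and the sampler lambdas are hypothetical names introduced for this example.

```python
import math
import random

def euclid(x, y):
    """Euclidean distance between two vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def mc_vector_metric(sample_x, sample_y, d=euclid, n=20000, seed=0):
    """Monte Carlo estimate of D(X, Y) = E[d(X, Y)] for independent random
    vectors; sample_x and sample_y each draw one vector given an RNG."""
    rng = random.Random(seed)
    return sum(d(sample_x(rng), sample_y(rng)) for _ in range(n)) / n

# Independent 2-d normal vectors with means (0, 0) and (3, 0), unit variances.
sx = lambda rng: [rng.gauss(0, 1), rng.gauss(0, 1)]
sy = lambda rng: [rng.gauss(3, 1), rng.gauss(0, 1)]
print(mc_vector_metric(sx, sy))  # above the distance between the means, 3
```

As in the scalar case, the estimate stays strictly above the Euclidean distance between the mean vectors, since the spread of the distributions always contributes to the expected distance.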

References

Template:Reflist