Ball covariance

Ball covariance is a statistical measure that can be used to test the independence of two random variables defined on metric spaces.[1] The ball covariance is zero if and only if the two random variables are independent, which makes it a natural basis for an independence test. Its significant contribution is to offer an alternative measure of independence in metric spaces: the earlier distance covariance in metric spaces[2] can characterize independence only in metric spaces of strong negative type, whereas ball covariance can determine independence under any metric.

Ball covariance uses a permutation test to calculate the p-value: the ball covariance statistic is first computed on the observed pair of samples, and this value is then compared with the values obtained from many random permutations of one of the samples, as in the sketch below.
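A minimal Python sketch of this permutation scheme (the function name `permutation_pvalue` and the add-one correction are conventions of this sketch; `statistic` stands in for the sample ball covariance defined later in this article):

```python
import numpy as np

def permutation_pvalue(X, Y, statistic, n_perm=999, seed=0):
    """Permutation p-value for a test of independence between the paired
    samples X and Y; `statistic` is any dependence measure mapping
    (X, Y) to a float, such as the sample ball covariance."""
    rng = np.random.default_rng(seed)
    observed = statistic(X, Y)
    exceed = 0
    for _ in range(n_perm):
        # Permuting one sample destroys the X-Y pairing, so each permuted
        # statistic is a draw from the distribution under independence.
        exceed += statistic(X, Y[rng.permutation(len(Y))]) >= observed
    # Add-one correction: the observed arrangement counts as one permutation.
    return (exceed + 1) / (n_perm + 1)
```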

Background

Correlation, as a fundamental concept of dependence in statistics, has been extensively developed in Hilbert spaces, exemplified by the Pearson correlation coefficient,[3] the Spearman correlation coefficient,[4] and Hoeffding's dependence measure.[5] However, many fields increasingly require measuring dependence or independence between complex objects, for example in medical imaging, computational biology, and computer vision. Examples of complex objects include Grassmann manifolds, planar shapes, tree-structured data, matrix Lie groups, deformation fields, symmetric positive definite (SPD) matrices, and shape representations of cortical and subcortical structures. These complex objects mostly live in non-Hilbert spaces and are inherently nonlinear and high-dimensional (or even infinite-dimensional). Traditional statistical techniques, developed for Hilbert spaces, may not be directly applicable to such objects. Analyzing objects that reside in non-Hilbert spaces therefore poses significant mathematical and computational challenges.

A groundbreaking earlier work on independence testing in metric spaces is the distance covariance in metric spaces proposed by Lyons (2013).[2] This statistic equals zero if and only if the random variables are independent, provided the metric space is of strong negative type. Testing independence in spaces that do not satisfy the strong negative type condition, however, required new approaches.[6]

Definition

Ball covariance

Next, we introduce ball covariance in detail, starting from the definition of a ball. Let $(\mathbf{X},\rho)$ and $(\mathbf{Y},\zeta)$ be two Banach spaces, where $\rho$ and $\zeta$ also denote the distances induced by the norms. Let $\theta$ be a Borel probability measure on $\mathbf{X}\times\mathbf{Y}$, let $\mu$ and $\nu$ be Borel probability measures on $\mathbf{X}$ and $\mathbf{Y}$, and let $(X,Y)$ be an $(\mathbf{X}\times\mathbf{Y})$-valued random variable defined on a probability space such that $(X,Y)\sim\theta$, $X\sim\mu$, and $Y\sim\nu$. Denote the closed ball in $\mathbf{X}$ with center $x_1$ and radius $\rho(x_1,x_2)$ by $\bar{B}(x_1,\rho(x_1,x_2))$ or $\bar{B}_\rho(x_1,x_2)$, and the closed ball in $\mathbf{Y}$ with center $y_1$ and radius $\zeta(y_1,y_2)$ by $\bar{B}(y_1,\zeta(y_1,y_2))$ or $\bar{B}_\zeta(y_1,y_2)$. Let $\{W_i=(X_i,Y_i),\ i=1,2,\dots\}$ be an infinite sequence of iid samples of $(X,Y)$, and let $\omega=(\omega_1,\omega_2)$ be a pair of positive weight functions on the support set of $\theta$. The population ball covariance can then be defined as

$$\mathbf{BCov}_\omega^2(X,Y)=\int\left[\theta-\mu\nu\right]^2\!\left(\bar{B}_\rho(x_1,x_2)\times\bar{B}_\zeta(y_1,y_2)\right)\omega_1(x_1,x_2)\,\omega_2(y_1,y_2)\,\theta(dx_1,dy_1)\,\theta(dx_2,dy_2),$$

where $[\theta-\mu\nu]^2(A\times B):=\left[\theta(A\times B)-\mu(A)\nu(B)\right]^2$ for $A\subseteq\mathbf{X}$ and $B\subseteq\mathbf{Y}$. If $X$ and $Y$ are independent, then $\theta=\mu\nu$ and the integrand vanishes for every pair of balls, so the ball covariance is zero; the converse holds under the conditions listed in the Properties section.

Next, we introduce another form of the population ball covariance. Define $\delta_{ij,k}^{X}:=I\!\left(X_k\in\bar{B}_\rho(X_i,X_j)\right)$, which indicates whether $X_k$ lies in the closed ball $\bar{B}_\rho(X_i,X_j)$. Let $\delta_{ij,kl}^{X}=\delta_{ij,k}^{X}\,\delta_{ij,l}^{X}$, which indicates whether both $X_k$ and $X_l$ lie in $\bar{B}_\rho(X_i,X_j)$, and set $\xi_{ij,klst}^{X}=\left(\delta_{ij,kl}^{X}+\delta_{ij,st}^{X}-\delta_{ij,ks}^{X}-\delta_{ij,lt}^{X}\right)/2$. Define $\delta_{ij,k}^{Y}$, $\delta_{ij,kl}^{Y}$ and $\xi_{ij,klst}^{Y}$ analogously for $Y$. Then, letting $(X_i,Y_i)$, $i=1,2,\dots,6$, be iid samples from $\theta$, the population ball covariance can also be written as

$$\mathbf{BCov}_\omega^2(X,Y)=E\!\left\{\xi_{12,3456}^{X}\,\xi_{12,3456}^{Y}\,\omega_1(X_1,X_2)\,\omega_2(Y_1,Y_2)\right\}.$$
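The indicator quantities are easy to compute directly. A small illustrative sketch, assuming univariate samples, the absolute difference as the metric $\rho$, and 0-based indices in place of the 1-based indices of the formulas:

```python
import numpy as np

x = np.array([0.3, 1.2, 0.7, 0.9, 2.0, 0.1])  # six draws standing in for X_1, ..., X_6

def delta(i, j, k):
    # delta^X_{ij,k} = I( X_k lies in the closed ball with center X_i
    # and radius rho(X_i, X_j) ), here with rho(a, b) = |a - b|.
    return float(abs(x[k] - x[i]) <= abs(x[j] - x[i]))

def xi(i, j, k, l, s, t):
    # xi^X_{ij,klst} = (delta_{ij,kl} + delta_{ij,st} - delta_{ij,ks} - delta_{ij,lt}) / 2
    pair = lambda a, b: delta(i, j, a) * delta(i, j, b)
    return (pair(k, l) + pair(s, t) - pair(k, s) - pair(l, t)) / 2

print(xi(0, 1, 2, 3, 4, 5))  # xi^X_{12,3456} for this sample
```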

Now we can express the sample ball covariance. Consider the random sample $(\mathbf{X},\mathbf{Y})=\{(X_k,Y_k),\ k=1,\dots,n\}$, and let $\hat{\omega}_{1,n}$ and $\hat{\omega}_{2,n}$ be estimates of $\omega_1$ and $\omega_2$. Writing

$$\Delta_{ij,n}^{X,Y}=\frac{1}{n}\sum_{k=1}^{n}\delta_{ij,k}^{X}\,\delta_{ij,k}^{Y},\qquad \Delta_{ij,n}^{X}=\frac{1}{n}\sum_{k=1}^{n}\delta_{ij,k}^{X},\qquad \Delta_{ij,n}^{Y}=\frac{1}{n}\sum_{k=1}^{n}\delta_{ij,k}^{Y},$$

the sample ball covariance is

$$\mathbf{BCov}_{\omega,n}^2(\mathbf{X},\mathbf{Y}):=\frac{1}{n^2}\sum_{i,j=1}^{n}\left(\Delta_{ij,n}^{X,Y}-\Delta_{ij,n}^{X}\,\Delta_{ij,n}^{Y}\right)^2\hat{\omega}_{1,n}(X_i,X_j)\,\hat{\omega}_{2,n}(Y_i,Y_j).$$
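A direct Python sketch of this estimator, assuming Euclidean metrics and constant weights $\hat{\omega}_{1,n}=\hat{\omega}_{2,n}=1$ (the weights in the definition are more general):

```python
import numpy as np

def sample_ball_cov(X, Y):
    """Sample ball covariance BCov^2_{omega,n} with unit weights.
    X: (n, p) array and Y: (n, q) array of paired observations;
    both metrics are assumed Euclidean in this sketch."""
    n = len(X)
    # Pairwise distance matrices in each space.
    dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dy = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    total = 0.0
    for i in range(n):
        for j in range(n):
            # delta^X_{ij,k} = I( rho(X_i, X_k) <= rho(X_i, X_j) ), k = 1, ..., n
            in_x = dx[i] <= dx[i, j]
            in_y = dy[i] <= dy[i, j]
            d_xy = np.mean(in_x & in_y)          # Delta^{X,Y}_{ij,n}
            d_x, d_y = in_x.mean(), in_y.mean()  # Delta^X_{ij,n}, Delta^Y_{ij,n}
            total += (d_xy - d_x * d_y) ** 2
    return total / n**2

# Toy usage: a dependent pair yields a visibly larger value than independent draws.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
print(sample_ball_cov(X, X + 0.5 * rng.normal(size=(50, 2))))
```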

Ball correlation

Just as the Pearson correlation coefficient is obtained by normalizing covariance, we can define the ball correlation coefficient through ball covariance. The ball correlation is defined as the square root of

$$\mathbf{BCor}_\omega^2(X,Y):=\mathbf{BCov}_\omega^2(X,Y)\Big/\sqrt{\mathbf{BCov}_\omega^2(X)\,\mathbf{BCov}_\omega^2(Y)},$$

where $\mathbf{BCov}_\omega^2(X)=\mathbf{BCov}_\omega^2(X,X)=E\!\left\{\left(\xi_{12,3456}^{X}\,\omega_1(X_1,X_2)\right)^2\right\}$ and $\mathbf{BCov}_\omega^2(Y)=\mathbf{BCov}_\omega^2(Y,Y)=E\!\left\{\left(\xi_{12,3456}^{Y}\,\omega_2(Y_1,Y_2)\right)^2\right\}$. The sample ball correlation is defined analogously as

$$\mathbf{BCor}_{\omega,n}^2(\mathbf{X},\mathbf{Y}):=\mathbf{BCov}_{\omega,n}^2(\mathbf{X},\mathbf{Y})\Big/\sqrt{\mathbf{BCov}_{\omega,n}^2(\mathbf{X})\,\mathbf{BCov}_{\omega,n}^2(\mathbf{Y})},$$

where $\mathbf{BCov}_{\omega,n}^2(\mathbf{X})=\mathbf{BCov}_{\omega,n}^2(\mathbf{X},\mathbf{X})$ and $\mathbf{BCov}_{\omega,n}^2(\mathbf{Y})=\mathbf{BCov}_{\omega,n}^2(\mathbf{Y},\mathbf{Y})$.
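Continuing the sketch above (this reuses `sample_ball_cov` and inherits its Euclidean, unit-weight assumptions), the sample ball correlation is a three-call normalization:

```python
import numpy as np

def sample_ball_cor(X, Y):
    # BCor^2_{omega,n} = BCov^2(X, Y) / sqrt( BCov^2(X, X) * BCov^2(Y, Y) ),
    # guarding against a zero denominator for degenerate samples.
    denom = np.sqrt(sample_ball_cov(X, X) * sample_ball_cov(Y, Y))
    return sample_ball_cov(X, Y) / denom if denom > 0 else 0.0
```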

Properties

1. Independence-zero equivalence property: Let $S_\theta$, $S_\mu$ and $S_\nu$ denote the support sets of $\theta$, $\mu$ and $\nu$, respectively. $\mathbf{BCov}_\omega(X,Y)=0$ implies $\theta=\mu\nu$ if one of the following conditions holds:

(a) $\mathbf{X}\times\mathbf{Y}$ is a finite-dimensional Banach space with $S_\theta=S_\mu\times S_\nu$.

(b) $\theta=a_1\theta_d+a_2\theta_a$, where $a_1$ and $a_2$ are positive constants, $\theta_d$ is a discrete measure, and $\theta_a$ is an absolutely continuous measure with a continuous Radon–Nikodym derivative with respect to a Gaussian measure.

2. Cauchy–Schwarz type inequality: $\mathbf{BCov}_\omega^2(X,Y)\le\mathbf{BCov}_\omega(X)\,\mathbf{BCov}_\omega(Y)$.

3. Consistency: If $\hat{\omega}_{1,n}$ and $\hat{\omega}_{2,n}$ converge uniformly to $\omega_1$ and $\omega_2$, respectively, with $E(\omega_1\omega_2)<\infty$, then $\mathbf{BCov}_{\omega,n}(\mathbf{X},\mathbf{Y})\xrightarrow{a.s.}\mathbf{BCov}_\omega(X,Y)$ and $\mathbf{BCor}_{\omega,n}(\mathbf{X},\mathbf{Y})\xrightarrow{a.s.}\mathbf{BCor}_\omega(X,Y)$ as $n\to\infty$.

4. Asymptotics: Suppose $\hat{\omega}_{1,n}$ and $\hat{\omega}_{2,n}$ converge uniformly to $\omega_1$ and $\omega_2$, respectively, with $E(\omega_1\omega_2)<\infty$.

(a) Under the null hypothesis of independence, $n\,\mathbf{BCov}_{\omega,n}^2(\mathbf{X},\mathbf{Y})\xrightarrow{d}\sum_{v=1}^{\infty}\lambda_v Z_v^2$ as $n\to\infty$, where the $Z_v$ are independent standard normal random variables; a small simulation of this limit follows the list.

(b) Under the alternative hypothesis, $\sqrt{n}\left(\mathbf{BCov}_{\omega,n}^2(\mathbf{X},\mathbf{Y})-\mathbf{BCov}_\omega^2(X,Y)\right)\xrightarrow{d}N(0,\Sigma)$ as $n\to\infty$.
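Property 4(a) can be checked informally with a small Monte Carlo experiment. This sketch reuses `sample_ball_cov` from the Definition section and only illustrates that $n\,\mathbf{BCov}_{\omega,n}^2$ stabilizes to a non-degenerate law under independence; it does not compute the weights $\lambda_v$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 200
# X and Y are drawn independently, so the null hypothesis holds by construction.
stats = [n * sample_ball_cov(rng.normal(size=(n, 1)), rng.normal(size=(n, 1)))
         for _ in range(reps)]
# The empirical law of n * BCov^2 approximates the weighted chi-squared limit;
# its 95% quantile is the critical value a level-0.05 permutation-free test would use.
print(np.mean(stats), np.quantile(stats, 0.95))
```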

References

Template:Reflist