Cramér's V


In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φc) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.[1]

Usage and interpretation

φc is the intercorrelation of two discrete variables[2] and may be used with variables having two or more levels. φc is a symmetrical measure: it does not matter which variable we place in the columns and which in the rows. Also, the order of rows/columns does not matter, so φc may be used with nominal data types or higher (notably, ordered or numerical).

Cramér's V varies from 0 (corresponding to no association between the variables) to 1 (complete association) and can reach 1 only when each variable is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.

φc² is the mean square canonical correlation between the variables.

In the case of a 2 × 2 contingency table, Cramér's V is equal to the absolute value of the phi coefficient.
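As a concrete check of this identity, the following R sketch compares φ, computed from the usual (ad − bc)/√((a+b)(c+d)(a+c)(b+d)) formula, with V as defined in the next section; the table values and object names are illustrative, not taken from the source.

  # Arbitrary 2 x 2 table of counts (illustrative values)
  tab <- matrix(c(8, 4,
                  3, 9), nrow = 2, byrow = TRUE)
  a <- tab[1, 1]; b <- tab[1, 2]; c <- tab[2, 1]; d <- tab[2, 2]
  phi  <- (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
  chi2 <- chisq.test(tab, correct = FALSE)$statistic   # Pearson chi-squared
  V    <- sqrt(as.numeric(chi2) / sum(tab))            # min(k-1, r-1) = 1 for a 2 x 2 table
  abs(phi) - V   # zero, up to floating-point error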

Calculation

Let a sample of size $n$ of the simultaneously distributed variables $A$ and $B$ for $i = 1, \ldots, r$; $j = 1, \ldots, k$ be given by the frequencies

$n_{ij} =$ number of times the values $(A_i, B_j)$ were observed.

The chi-squared statistic then is:

$$\chi^2 = \sum_{i,j} \frac{\left(n_{ij} - \frac{n_{i.}\,n_{.j}}{n}\right)^2}{\frac{n_{i.}\,n_{.j}}{n}},$$

where $n_{i.} = \sum_j n_{ij}$ is the number of times the value $A_i$ is observed and $n_{.j} = \sum_i n_{ij}$ is the number of times the value $B_j$ is observed.
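As a small illustration, the following R sketch computes the statistic directly from a table of counts using the row and column totals; the function and object names are illustrative and not from any package.

  # chi-squared statistic of an r x k table of observed counts n_ij
  chi_squared <- function(tab) {
    n        <- sum(tab)                                # total sample size
    expected <- outer(rowSums(tab), colSums(tab)) / n   # n_i. * n_.j / n
    sum((tab - expected)^2 / expected)
  }

  tab <- matrix(c(10, 20, 30,
                  20, 15,  5), nrow = 2, byrow = TRUE)  # example 2 x 3 table
  chi_squared(tab)   # agrees with chisq.test(tab, correct = FALSE)$statistic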

Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the minimum dimension minus 1:

$$V = \sqrt{\frac{\varphi^2}{\min(k-1,\, r-1)}} = \sqrt{\frac{\chi^2/n}{\min(k-1,\, r-1)}},$$

where:

  • φ is the phi coefficient,
  • χ² is derived from Pearson's chi-squared test,
  • n is the grand total of observations,
  • k is the number of columns, and
  • r is the number of rows.
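A minimal R sketch of this formula, assuming a matrix of counts and using base R's chisq.test without continuity correction; the function name cramers_v is illustrative and is not the rcompanion or lsr implementation discussed below.

  cramers_v <- function(tab) {
    tab  <- as.matrix(tab)
    n    <- sum(tab)
    chi2 <- suppressWarnings(chisq.test(tab, correct = FALSE))$statistic
    # sqrt of (chi-squared / n) divided by the minimum dimension minus 1
    sqrt((as.numeric(chi2) / n) / min(ncol(tab) - 1, nrow(tab) - 1))
  }

  cramers_v(matrix(c(10, 20, 30,
                     20, 15,  5), nrow = 2, byrow = TRUE))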

The p-value for the significance of V is the same one that is calculated using Pearson's chi-squared test.

The formula for the variance of V (that is, of φc) is known.[3]

In R, the function cramerV() from the package rcompanion[4] calculates V using the chisq.test function from the stats package. In contrast to the function cramersV() from the lsr[5] package, cramerV() also offers an option to correct for bias. It applies the correction described in the following section.

Bias correction

Cramér's V can be a heavily biased estimator of its population counterpart and will tend to overestimate the strength of association. A bias correction, using the above notation, is given by[6]

$$\tilde V = \sqrt{\frac{\tilde\varphi^2}{\min(\tilde k - 1,\, \tilde r - 1)}}$$

where

$$\tilde\varphi^2 = \max\left(0,\, \varphi^2 - \frac{(k-1)(r-1)}{n-1}\right)$$

and

$$\tilde k = k - \frac{(k-1)^2}{n-1}, \qquad \tilde r = r - \frac{(r-1)^2}{n-1}.$$

Then $\tilde V$ estimates the same population quantity as Cramér's V but with typically much smaller mean squared error. The rationale for the correction is that, under independence, $\operatorname{E}[\varphi^2] = \frac{(k-1)(r-1)}{n-1}$.[7]
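A sketch of the bias-corrected estimator under the same assumptions as the earlier snippets (an r × k matrix of counts; the function name is illustrative):

  cramers_v_corrected <- function(tab) {
    tab   <- as.matrix(tab)
    n     <- sum(tab)
    r     <- nrow(tab)
    k     <- ncol(tab)
    chi2  <- suppressWarnings(chisq.test(tab, correct = FALSE))$statistic
    phi2  <- as.numeric(chi2) / n
    phi2c <- max(0, phi2 - (k - 1) * (r - 1) / (n - 1))  # tilde-phi^2
    kc    <- k - (k - 1)^2 / (n - 1)                     # tilde-k
    rc    <- r - (r - 1)^2 / (n - 1)                     # tilde-r
    sqrt(phi2c / min(kc - 1, rc - 1))
  }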

See also

Other measures of correlation for nominal data:

  • Phi coefficient
  • Tschuprow's T
  • Contingency coefficient
  • Uncertainty coefficient
  • Goodman and Kruskal's lambda

References



  1. Cramér, Harald (1946). Mathematical Methods of Statistics. Princeton: Princeton University Press, p. 282 (Chapter 21: The two-dimensional case).
  2. Sheskin, David J. (1997). Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, FL: CRC Press.
  3. Liebetrau, Albert M. (1983). Measures of association. Newbury Park, CA: Sage Publications. Quantitative Applications in the Social Sciences Series No. 32. (pages 15–16)
  4. Template:Cite web
  5. Template:Cite web
  6. Template:Cite journal
  7. Template:Cite journal
  8. Template:Cite journal