Cramér's V
In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φc) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.[1]
Usage and interpretation
φc is the intercorrelation of two discrete variables[2] and may be used with variables having two or more levels. φc is a symmetrical measure: it does not matter which variable we place in the columns and which in the rows. Also, the order of rows/columns does not matter, so φc may be used with nominal data types or higher (notably, ordered or numerical).
Cramér's V varies from 0 (corresponding to no association between the variables) to 1 (complete association) and can reach 1 only when each variable is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.
φc² is the mean square canonical correlation between the variables.
In the case of a 2 × 2 contingency table, Cramér's V is equal to the absolute value of the phi coefficient.
Calculation
Let a sample of size $n$ of the simultaneously distributed variables $A$ and $B$ for $i = 1, \ldots, r$; $j = 1, \ldots, k$ be given by the frequencies
- $n_{ij}$ = number of times the values $(A_i, B_j)$ were observed.
The chi-squared statistic then is:
$$\chi^2 = \sum_{i,j} \frac{\left(n_{ij} - \frac{n_{i\cdot}\, n_{\cdot j}}{n}\right)^2}{\frac{n_{i\cdot}\, n_{\cdot j}}{n}},$$
where $n_{i\cdot} = \sum_j n_{ij}$ is the number of times the value $A_i$ is observed and $n_{\cdot j} = \sum_i n_{ij}$ is the number of times the value $B_j$ is observed.
Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the minimum dimension minus 1:
$$V = \sqrt{\frac{\varphi^2}{\min(k-1,\, r-1)}} = \sqrt{\frac{\chi^2 / n}{\min(k-1,\, r-1)}},$$
where:
- $\varphi$ is the phi coefficient,
- $\chi^2$ is derived from Pearson's chi-squared test,
- $n$ is the grand total of observations,
- $k$ is the number of columns, and
- $r$ is the number of rows.
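A minimal Python sketch of this calculation, assuming the input is a contingency table of raw counts; the function name, intermediate variables, and example table are illustrative only and are not part of the article or of any particular package:

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for an r x k contingency table of observed counts n_ij."""
    n_ij = np.asarray(table, dtype=float)
    n = n_ij.sum()                               # grand total of observations
    row_tot = n_ij.sum(axis=1)                   # n_i. : row totals
    col_tot = n_ij.sum(axis=0)                   # n_.j : column totals
    expected = np.outer(row_tot, col_tot) / n    # n_i. * n_.j / n
    chi2 = ((n_ij - expected) ** 2 / expected).sum()  # Pearson's chi-squared statistic
    r, k = n_ij.shape                            # number of rows and columns
    return np.sqrt((chi2 / n) / min(k - 1, r - 1))

# Hypothetical 2 x 3 table of observed counts
observed = [[10, 20, 30],
            [25, 15, 5]]
print(round(cramers_v(observed), 3))
```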
The p-value for the significance of V is the same as the one calculated using Pearson's chi-squared test.
The formula for the variance of V=φc is known.[3]
In R, the function cramerV() from the package rcompanion[4] calculates V using the chisq.test function from the stats package. In contrast to the function cramersV() from the lsr[5] package, cramerV() also offers an option to correct for bias. It applies the correction described in the following section.
Bias correction
Cramér's V can be a heavily biased estimator of its population counterpart and will tend to overestimate the strength of association. A bias correction, using the above notation, is given by[6]
$$\tilde{V} = \sqrt{\frac{\tilde{\varphi}^2}{\min(\tilde{k}-1,\, \tilde{r}-1)}},$$
where
$$\tilde{\varphi}^2 = \max\!\left(0,\; \varphi^2 - \frac{(k-1)(r-1)}{n-1}\right)$$
and
$$\tilde{k} = k - \frac{(k-1)^2}{n-1}, \qquad \tilde{r} = r - \frac{(r-1)^2}{n-1}.$$
Then $\tilde{V}$ estimates the same population quantity as Cramér's V but with typically much smaller mean squared error. The rationale for the correction is that under independence, $\operatorname{E}[\varphi^2] = \frac{(k-1)(r-1)}{n-1}$.[7]
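Continuing the earlier sketch, a hedged Python illustration of this correction; the function name and the sample table are again assumptions made for the example rather than an established API:

```python
import numpy as np

def cramers_v_bias_corrected(table):
    """Bias-corrected Cramér's V for an r x k contingency table, per the formulas above."""
    n_ij = np.asarray(table, dtype=float)
    n = n_ij.sum()
    expected = np.outer(n_ij.sum(axis=1), n_ij.sum(axis=0)) / n
    chi2 = ((n_ij - expected) ** 2 / expected).sum()
    r, k = n_ij.shape
    phi2 = chi2 / n
    # Subtract the expected value of phi^2 under independence, truncating at zero
    phi2_tilde = max(0.0, phi2 - (k - 1) * (r - 1) / (n - 1))
    # Shrink the effective numbers of columns and rows
    k_tilde = k - (k - 1) ** 2 / (n - 1)
    r_tilde = r - (r - 1) ** 2 / (n - 1)
    return np.sqrt(phi2_tilde / min(k_tilde - 1, r_tilde - 1))

observed = [[10, 20, 30],
            [25, 15, 5]]
print(round(cramers_v_bias_corrected(observed), 3))
```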
See also
Other measures of correlation for nominal data:
- The Percent Maximum Difference[8]
- The phi coefficient
- Tschuprow's T
- The uncertainty coefficient
- The Lambda coefficient
- The Rand index
- Davies–Bouldin index
- Dunn index
- Jaccard index
- Fowlkes–Mallows index
References
1. Cramér, Harald (1946). Mathematical Methods of Statistics. Princeton: Princeton University Press, p. 282 (Chapter 21: The two-dimensional case).
2. Sheskin, David J. (1997). Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, FL: CRC Press.
3. Liebetrau, Albert M. (1983). Measures of Association. Newbury Park, CA: Sage Publications. Quantitative Applications in the Social Sciences Series No. 32, pp. 15–16.
4. cramerV(). rcompanion package documentation, CRAN.
5. cramersV(). lsr package documentation, CRAN.
6. Bergsma, Wicher (2013). "A bias-correction for Cramér's V and Tschuprow's T". Journal of the Korean Statistical Society 42 (3): 323–328.
7. Template:Cite journal
8. Template:Cite journal
External links
- A Measure of Association for Nonparametric Statistics (Alan C. Acock and Gordon R. Stavig, pp. 1381–1386)
- Nominal Association: Phi and Cramer's V, from the homepage of Pat Dattalo.