To implement the following measures of correlation, we use the 'wine' dataset from the **HDclassif** package. To whet your palate, we test for correlation between 'alcohol' and 'proline' (an amino acid) content. Note that we employ the correlation tests from the **psych** package rather than **base** R.

library(psych)
library(HDclassif)
data(wine)

**Pearson's Rho Correlation**

The Pearson correlation coefficient is a measure of the linear dependence between two variables and takes values between −1 and 1, where 1 implies total positive correlation, 0 no correlation, and −1 total negative correlation. For a population, the Pearson correlation between variables *X* and *Y* is defined by

$$\rho_{X,Y} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{E\left[(X-\mu_X)(Y-\mu_Y)\right]}{\sigma_X \sigma_Y},$$

where $E$ denotes the expectation operator and $\sigma^2$ indicates variance. The sample correlation $r$, which is the estimate of $\rho$ based on a sample of observations over $T$ timesteps, is computed by

$$r = \frac{\sum_{t=1}^{T}(x_t - \bar{x})(y_t - \bar{y})}{\sqrt{\sum_{t=1}^{T}(x_t - \bar{x})^2}\,\sqrt{\sum_{t=1}^{T}(y_t - \bar{y})^2}},$$

where $(x_t, y_t)$ is the pair of observed X and Y values at time $t$, $t = 1, \ldots, T$, and $\bar{x}$ and $\bar{y}$ are the averages of the observed X and Y values, respectively. The major caveat of the Pearson correlation coefficient is the set of assumptions it makes about the relationship between X and Y. Firstly, it supposes that X and Y are continuous variables (a practical limitation). Secondly, that X and Y have a pairwise association; in other words, for each $x_t$ there is a corresponding $y_t$. Thirdly, that the relationship is free of outliers that skew it in either direction. Fourthly, and arguably the most critical assumption, is the linearity of the relationship: when plotting X against Y, an increase in X corresponds to a proportional increase or decrease in Y. Finally, homoscedasticity: the variance of the relationship is constant, so when viewing the scatterplot of X against Y, the dispersion should be consistent across all values.
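As a quick sanity check of the sample formula, here is a minimal stdlib-only sketch in Python (this is our own illustration, not the **psych** implementation; the helper name `pearson_r` is ours):

```python
# Minimal sketch (not the psych implementation): sample Pearson correlation
# computed directly from the formula above, using only the Python stdlib.
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation between paired observations x and y."""
    assert len(x) == len(y) and len(x) > 1
    t = len(x)
    x_bar = sum(x) / t
    y_bar = sum(y) / t
    # Numerator: sum of cross-deviations from the means
    num = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    # Denominator: product of the root sums of squared deviations
    den = sqrt(sum((xi - x_bar) ** 2 for xi in x)) * \
          sqrt(sum((yi - y_bar) ** 2 for yi in y))
    return num / den

# Perfectly linear data gives r = 1; a perfect negative relation gives r = -1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(pearson_r(x, [2 * xi + 1 for xi in x]), 6))   # 1.0
print(round(pearson_r(x, [-3 * xi for xi in x]), 6))      # -1.0
```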

To perform in R:

corr.test(wine[, c("V1","V13")], method='pearson')

Which outputs:

Call:corr.test(x = wine[, c("V1", "V13")], method = "pearson")
Correlation matrix
      V1  V13
V1  1.00 0.64
V13 0.64 1.00
Sample Size
[1] 178
Probability values (Entries above the diagonal are adjusted for multiple tests.)
    V1 V13
V1   0   0
V13  0   0

**Spearman's Rank Correlation**

Spearman rank correlation is a nonparametric measure of association between variables measured on an ordinal, interval, or ratio scale that exhibit a monotonic relationship. For interval or ratio (i.e. continuous) data, it differs from Pearson in that the relationship between the variables need be neither linear nor homoscedastic; otherwise, the general assumptions about the relationship between X and Y still apply. The benefit of the Spearman correlation measure is how little it imposes on the relationship: it assumes only that a monotonic relationship between X and Y exists. In other words, the two variables increase (or decrease) in value together, or one increases as the other decreases; the magnitude by which this occurs is not a presumption of the measure, which is based on the ranks of the data values in each variable. The formula for calculating Spearman's rank correlation is

$$\rho_s = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)},$$

where $d_i$ is the difference between the ranks of the $i$-th pair of observations. Spearman's rank correlation is satisfactory for determining the relationship between two variables, but it may be difficult to interpret intuitively because of its quadratic form. Kendall's rank correlation improves upon this by reflecting the strength of the dependence between the variables being compared without penalizing the squares of the discordances.
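The rank-difference formula can be sketched directly in Python (stdlib only, our own illustration; for simplicity it assumes no tied values, in which case no tie correction is needed):

```python
# Minimal sketch (assumes no tied values): Spearman's rho from the
# rank-difference formula rho_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
def ranks(values):
    """Rank of each value within its sample (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    # d_i is the difference between the ranks of the i-th pair
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# A monotonic but nonlinear relation (y = x^3) still yields rho_s = 1,
# which is exactly where Spearman departs from Pearson.
x = [1, 2, 3, 4, 5]
print(spearman_rho(x, [xi ** 3 for xi in x]))   # 1.0
print(spearman_rho(x, [5, 4, 3, 2, 1]))         # -1.0
```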

To perform in R:

corr.test(wine[, c("V1","V13")], method='spearman')

Which outputs:

Call:corr.test(x = wine[, c("V1", "V13")], method = "spearman")
Correlation matrix
      V1  V13
V1  1.00 0.63
V13 0.63 1.00
Sample Size
[1] 178
Probability values (Entries above the diagonal are adjusted for multiple tests.)
    V1 V13
V1   0   0
V13  0   0

**Kendall's Tau Correlation**

Kendall's Tau correlation is another non-parametric test of correlation and a measure of the strength of dependence between two variables. Consider two samples of size n: X and Y. The total number of possible pairings of X with Y observations is $\tfrac{1}{2}n(n-1)$. Now sort X and Y independently and consider their ranked pairings. If a pair of observations is ordered one way on X and the opposite way on Y, then this pair is discordant; otherwise the pair is concordant. The numerator of Tau is the difference between the number of concordant pairs (ordered in the same way, $n_c$) and discordant pairs (ordered differently, $n_d$).

Tau is given by:

$$\tau = \frac{n_c - n_d}{\tfrac{1}{2}n(n-1)}.$$

If there are tied (same value) observations, then the following tie-corrected formula (often called $\tau_b$) is used:

$$\tau_b = \frac{n_c - n_d}{\sqrt{\left(\tfrac{1}{2}n(n-1) - \sum_i \tfrac{1}{2}t_i(t_i-1)\right)\left(\tfrac{1}{2}n(n-1) - \sum_j \tfrac{1}{2}u_j(u_j-1)\right)}},$$

where $t_i$ is the number of observations tied at a particular rank of X and $u_j$ is the number tied at a rank of Y. In summary, Kendall's Tau penalizes disordered pairs by the distance of their disorder, whereas Spearman's Rho penalizes disordered pairs by the *squared* distance of their disorder.
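Counting concordant and discordant pairs is straightforward to sketch in Python (stdlib only, our own illustration; no ties assumed, so the uncorrected $\tau$ formula applies):

```python
# Minimal sketch (no tied values): Kendall's tau from concordant/discordant
# pair counts, tau = (n_c - n_d) / (n * (n - 1) / 2).
def kendall_tau(x, y):
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair is concordant when X and Y order it the same way,
            # discordant when the two orderings disagree.
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

x = [1, 2, 3, 4, 5]
print(kendall_tau(x, [xi ** 3 for xi in x]))   # 1.0 (monotonic increasing)
print(kendall_tau(x, [2, 1, 4, 3, 5]))         # 0.6 (2 of 10 pairs discordant)
```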

To perform in R:

corr.test(wine[, c("V1","V13")], method='kendall')

Which outputs:

Call:corr.test(x = wine[, c("V1", "V13")], method = "kendall")
Correlation matrix
      V1  V13
V1  1.00 0.45
V13 0.45 1.00
Sample Size
[1] 178
Probability values (Entries above the diagonal are adjusted for multiple tests.)
    V1 V13
V1   0   0
V13  0   0

As a rank-based measure, Kendall's Tau is arguably superior to Spearman's Rho from an interpretation perspective (Newson, 2002). In spite of this, Spearman and Kendall are intimately related. Note that, based on (5) in Xu et al. (2010), we obtain for large n

$$-1 \le 3\tau - 2\rho_s \le 1.$$

Hence, asymptotically, when Kendall's Tau is equal to 1 the inequality forces Spearman's Rho to equal 1 as well.
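One can check numerically that $3\tau - 2\rho_s$ stays within $[-1, 1]$ on simulated data. A stdlib-only Python sketch (our own illustration, re-implementing both rank measures from their formulas; random floats make ties virtually impossible, so no tie correction is applied):

```python
# Empirical check that -1 <= 3*tau - 2*rho_s <= 1 on random paired samples.
import random

def ranks(values):
    # Rank of each value (1 = smallest); assumes no ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendall_tau(x, y):
    n = len(x)
    nc_minus_nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            p = (x[i] - x[j]) * (y[i] - y[j])
            nc_minus_nd += (p > 0) - (p < 0)  # +1 concordant, -1 discordant
    return nc_minus_nd / (n * (n - 1) / 2)

random.seed(1)
for _ in range(200):
    x = [random.random() for _ in range(30)]
    y = [random.random() for _ in range(30)]
    gap = 3 * kendall_tau(x, y) - 2 * spearman_rho(x, y)
    assert -1 <= gap <= 1
print("inequality held on all 200 samples")
```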

While the threshold for determining significance of correlation differs depending on the application and correlation measure, it is common to use a fixed absolute-value cut-off to flag strong correlation.

[1] Xu, Weichao; Hou, Yunhe; Hung, Y. S.; Zou, Yuexian. Comparison of Spearman's rho and Kendall's tau in Normal and Contaminated Normal Models. eprint arXiv:1011.2009.

[2] Newson, R. Parameters behind "nonparametric" statistics: Kendall's tau, Somers' D and median differences. Stata Journal 2002; 2(1): 45-64.