# Karl Pearson’s curious construction of spurious correlation

Karl Pearson (1857-1936) was one of the founders of modern statistics; see this discussion by Stephen Stigler (2008), and Stigler’s *The Seven Pillars of Statistical Wisdom* (2016).

I now want to focus on Pearson’s 1897 paper *Mathematical Contributions to the Theory of Evolution. – On a Form of Spurious Correlation Which May Arise When Indices Are Used in the Measurement of Organs.*

The main theme is that if you use the wrong model, then the correlations found under that model will be *spurious* compared to those of the true model. Thus Pearson goes to great lengths to construct a wrong model and to compare it with what he claims is the true model. It may be, though, that he still did not arrive at the true model himself. Apart from this complication, it is admirable in itself that he points to the notion of such spurious correlation.

One example in Pearson’s paper is the measurement of skulls in Bavaria (p. 495). The issue concerns *compositional data*, i.e. data vectors that add up to a given total, say 100%. The previous entry on this weblog presented the inequality / disproportionality measure SDID for votes and seats. Votes and seats become compositional data when we divide them by their sum totals, so that we compare 100% of the votes with 100% of the seats.

Pearson’s analysis got a sequel in the Aitchison geometry; see the historical exposition by Vera Pawlowsky-Glahn and Juan José Egozcue, *The closure problem: one hundred years of debate*. Early on I became a fan of the Aitchison & Brown book on the lognormal distribution, and I still am, but I have my doubts about the need for this particular geometry for compositional data. In itself the Aitchison geometry is a contribution, with a vector space, norm and inner product. When we transform the data to logarithms, multiplication becomes addition and powers become scalar multiples, so that we can imagine such a vector space; the amazing finding is that rebasing to 1 or 100% can be maintained. Rebasing a vector to a constant sum is called “*closure*”. What, however, is the added value of using this geometry?

It may well be that different fields of application remain different in content, so that when they generate compositional data, these data are only similar in form, and we should be careful about using the same techniques merely because of that similar form. We must also distinguish:

- Problems for compositional data that can be handled by *both* Sine / Cosine *and* the Aitchison geometry, but for which Sine and Cosine are simpler.
- Problems for compositional data that can only be handled by the Aitchison geometry.

An example of the latter might be the paper by Javier Palarea-Albaladejo, Josep Antoni Martín-Fernández and Jesús A. Soto (2012), in which they compare the compositions of the milk of different mammals. I find this difficult to judge on content, since I am no biologist. See the **addendum** below on the distance function.

In a fine overview by presentation sheets, Pawlowsky-Glahn, Egozcue & Meziat (2007) present the following example, adapted from Aitchison. They compare two sets of data on soil samples, of which one set (*A*) includes the water content of each sample. If you want to spot the problem with this analysis yourself, have a try, and otherwise read on.

When the water content in a sample of *A* is dropped, the remaining scores are rebased to a total of 100% again, which gives *B*. E.g. for sample 1, with its 60% water, this becomes:

{0.1, 0.2, 0.1} / (0.1 + 0.2 + 0.1) = {0.25, 0.5, 0.25}
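In Python with numpy (my own sketch, not part of the original example), this rebasing, i.e. the closure operation, is a one-liner:

```python
import numpy as np

def closure(x):
    """Rebase a composition to unit sum (the Aitchison 'closure')."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

# Dropping the 60% water from sample 1 and rebasing the rest,
# which matches {0.25, 0.5, 0.25} above:
print(closure([0.1, 0.2, 0.1]))
```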

PM. A more complex example with simulated data is by David Lovell.

##### Reproduction of this example

It is useful to first reproduce the example so that we can later adapt it.

In Wolfram Alpha, we can reproduce the outcome as follows.

For *A*, the input code is:

```
mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}};
Correlation[mat1] // Chop // MatrixForm
```

For *B*, the input code is:

```
mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}};
droplast[x_List?VectorQ] := Module[{a}, a = Drop[x, -1]; a / (Plus @@ a)];
mat2 = droplast /@ mat1;
Correlation[mat2] // Chop // MatrixForm
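The same computation can be sketched in Python with numpy (my own translation of the Wolfram code; `droplast` mirrors the Mathematica helper). In this translation the point of the example shows up directly: the correlation between the first two components is positive under *A* but negative under *B*.

```python
import numpy as np

# Soil compositions A: rows are samples, columns are three components + water
mat1 = np.array([[0.1, 0.2, 0.1, 0.6],
                 [0.2, 0.1, 0.2, 0.5],
                 [0.3, 0.3, 0.1, 0.3]])

def droplast(x):
    """Drop the last component (water) and rebase to unit sum."""
    a = x[:-1]
    return a / a.sum()

mat2 = np.array([droplast(row) for row in mat1])  # compositions B

corr_a = np.corrcoef(mat1, rowvar=False)  # column correlations under A
corr_b = np.corrcoef(mat2, rowvar=False)  # column correlations under B

# The (1,2) entry flips sign after dropping water and rebasing
print(np.round(corr_a, 3))
print(np.round(corr_b, 3))
```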

##### The confusion about the correlation

In the previous weblog entry, we had SDID[*v*, *s*] for the votes *v* and seats *s*. In that way of thinking, we would reason differently: *we would compare (correlate) rows and not columns*.

There is also the difference that correlation uses centered data, while Sine and Cosine use the original, non-centered data. Perhaps this contributed to Pearson’s view.

One possibility is that we compare sample 1 according to *A* with sample 1 according to *B*, as SDID[1A*, 1B]. Since the measures of *A* also contain water, we must drop the water content and create A*. The assumption is that *A* and *B* are independent measurements, and that we want to see whether they generate the same result. When the measurements are not affected by the content of water, we would find zero inequality / disproportionality. However, Pawlowsky-Glahn et al. do not state the problem in this way.

The other possibility is that we would compare SDID[sample *i*, sample *j*].

Instead of using SDID for inequality / disproportionality, let us now use the cosine as a measure of similarity.

For *A*, the input code is:

```
mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}};
cos[x__] := 1 - CosineDistance[x];
Outer[cos, mat1, mat1, 1] // Chop // MatrixForm
```

Since the water content is not the same in all samples, the above scores will be off. To see whether these similarities are sensitive to the contamination by water, we look at the samples according to *B*.

The input code for Wolfram Alpha is:

```
mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}};
cos[x__] := 1 - CosineDistance[x];
droplast[x_List?VectorQ] := Module[{a}, a = Drop[x, -1]; a / (Plus @@ a)];
mat2 = droplast /@ mat1;
Outer[cos, mat2, mat2, 1] // Chop // MatrixForm
```
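For readers without Mathematica, the same row-wise similarities can be sketched in Python with numpy (my own translation; `cos_sim` plays the role of `1 - CosineDistance`):

```python
import numpy as np

mat1 = np.array([[0.1, 0.2, 0.1, 0.6],
                 [0.2, 0.1, 0.2, 0.5],
                 [0.3, 0.3, 0.1, 0.3]])

def cos_sim(x, y):
    """Cosine similarity between two vectors."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def droplast(x):
    """Drop the water component and rebase to unit sum."""
    a = x[:-1]
    return a / a.sum()

mat2 = np.array([droplast(row) for row in mat1])

# Pairwise similarities between samples (rows), with and without water
sim_a = np.array([[cos_sim(x, y) for y in mat1] for x in mat1])
sim_b = np.array([[cos_sim(x, y) for y in mat2] for x in mat2])
print(np.round(sim_a, 3))
print(np.round(sim_b, 3))
```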

Since the water content differs so much per sample, and apparently is not considered relevant for the shares of the other components, the latter matrix of similarities is the more relevant one.

If we know that the samples are from the same soil, then this would give an indication of sample variability. Conversely, we might have information about the dispersion of samples, and perhaps we might determine whether the samples are from the same soil.

Obviously, one must have studied soil samples to say something about content. The above is only a mathematical exercise, which highlights the non-transposed case (rows) versus the transposed case (columns).

##### Evaluation

Reading Pearson’s 1897 paper shows that he indeed looks at the issue from the angle of the columns, and that he considers calibration of measurements by switching to relative data. He gives various examples, but let me show the case of skull measurement, which may still be a challenge:

Pearson presents two correlation coefficients for B / L versus H / L. One is based upon the standard definition (which allows for correlations between the levels), and one, baptised “spurious”, is based upon the assumption of independently distributed levels (and thus zero correlations between them). Subsequently he throws doubt on the standard correlation because of the high value of the spurious one.
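Pearson’s effect is easy to reproduce by simulation. The sketch below uses hypothetical numbers of my own choosing, not Pearson’s Bavarian data: draw independent breadth B, length L and height H, and correlate the indices B / L and H / L. The shared divisor L alone produces a clearly positive correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Independent, positive "absolute" measurements (hypothetical values)
B = rng.normal(150, 10, n)  # breadth
L = rng.normal(180, 10, n)  # length
H = rng.normal(130, 10, n)  # height

# B, L, H are independent, yet the indices B/L and H/L share the divisor L
r = np.corrcoef(B / L, H / L)[0, 1]
print(round(r, 2))  # clearly positive, though no real association exists
```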

One must be a biologist or even a skull specialist to determine whether this is a useful approach. If the true model uses relative data with zero correlations, what is the value of the assumptions of zero or nonzero correlations for the absolute values? What is useful depends upon the research question too. We can calculate all kinds of statistics, but what decision is intended?

It is undoubtedly a contribution by Pearson to show that looking at phenomena in this manner can generate what he calls “spurious correlation”. Whatever the model, it is an insight that using the wrong model can create spurious correlation and a false sense of achievement. I would feel more comfortable, though, if Pearson had also mentioned the non-transposed case, which I would tend to regard as the proper model, i.e. comparing skulls rather than correlating categories of skull measurements. Yet he does not mention it.

Apparently the Aitchison geometry provides a solution along the lines of Pearson’s approach, thus still looking at transposed (column) data. This causes the same discomfort.

##### Pro memoria

The above uses soil and skulls, which are not areas of my expertise. I am more comfortable with votes and seats, or with budget shares in economics (e.g. in the Somermeyer model or the indirect addilog demand system; Barten, De Boer).

##### Conclusion

Pearson was not confused about what he defined as spurious correlation. He may, however, have been confused about the proper way to deal with compositional data, since he looked at columns rather than rows. This also depends upon the field of interest and the research question. Perhaps a historian can determine whether Pearson ever looked at compositional data from rows rather than columns.

##### Addendum November 23 2017

For geological data, Watson & Philip (1989) already discussed the angular distance. Martín-Fernández, Barceló-Vidal & Pawlowsky-Glahn (2000), “*Measures of differences for compositional data and hierarchical clustering methods*”, discuss distance measures. They also mention the angle between two vectors, found via ArcCos[cos[*v*, *s*]], for votes *v* and seats *s*; it appears in the second row of their Table 1. The vectors can also be normalised to the unit simplex as *w* = *v* / Sum[*v*] and *z* = *s* / Sum[*s*], though the cosine is insensitive to this, with cos[*w*, *z*] = cos[*v*, *s*].
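A small Python check (with hypothetical votes and seats of my own) illustrates this insensitivity of the cosine to rebasing on the unit simplex:

```python
import numpy as np

def cos_sim(x, y):
    """Cosine similarity between two vectors."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Hypothetical absolute votes and seats
v = np.array([400, 350, 250])
s = np.array([45, 35, 20])

w = v / v.sum()  # vote shares
z = s / s.sum()  # seat shares

angle = np.arccos(cos_sim(v, s))  # angular distance

# The cosine is insensitive to rebasing to the unit simplex:
print(np.isclose(cos_sim(v, s), cos_sim(w, z)))  # True
print(round(angle, 4))
```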

In sum, the angular distance, i.e. the use of the sine as a distance measure and the cosine as a similarity measure, satisfies the Aitchison criteria of invariance under scale and permutation, but it does not satisfy subcompositional dominance or invariance under translation (perturbation).

This discussion makes me wonder whether there are still key differences between kinds of compositional data in terms of *concepts*. The compositional form should not distract us from the content. For the Euclidean norm, a translation leaves a distance unaffected: Norm[*x* - *y*] = Norm[(*x* + *t*) - (*y* + *t*)]. This property can be copied for logratio data. However, for votes and seats, it is not clear why a (per party different) percentage change vector should leave the distance unaffected (as happens with the logratio distance).
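A Python sketch (my own, using the standard centered-logratio form of the Aitchison distance) shows the contrast: the logratio distance is unaffected by a perturbation, while the angular distance is not.

```python
import numpy as np

def closure(x):
    """Rebase a composition to unit sum."""
    return x / x.sum()

def clr(x):
    """Centered logratio transform."""
    lx = np.log(x)
    return lx - lx.mean()

def aitchison_dist(x, y):
    """Aitchison (logratio) distance: Euclidean distance of clr vectors."""
    return np.linalg.norm(clr(x) - clr(y))

def angular_dist(x, y):
    """Angle between the two vectors."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

x = closure(np.array([0.25, 0.50, 0.25]))
y = closure(np.array([0.40, 0.20, 0.40]))
p = np.array([1.5, 0.8, 1.2])  # a per-component percentage change (perturbation)

xp = closure(x * p)  # perturbed and rebased
yp = closure(y * p)

# Logratio distance is invariant under the perturbation; angular distance is not
print(np.isclose(aitchison_dist(x, y), aitchison_dist(xp, yp)))  # True
print(np.isclose(angular_dist(x, y), angular_dist(xp, yp)))      # False
```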

An election only gives votes and seats. Thus there is no larger matrix of data. Comparison with other times and nations has limited meaning. Thus there may be no need for the full Aitchison geometry.

At this moment, I can only conclude that Sine (distance) and Cosine (similarity) are better for votes and seats than what political scientists have been using till now. It remains to be seen for votes and seats whether the logratio approach would be better than the angular distance and the use of Sine and Cosine.