Archive

Role of mathematics

Let us look beyond Brexit and consider the implications w.r.t. democracy itself. We can conclude that the UK has an intellectual community that is quite blind to the very notion of democracy. When the educated go astray, the only remaining anchor lies in the democratic notions of the whole population, and this opens the door to what is called “populism”.

I started looking into Brexit after the surprise referendum outcome in 2016. This memo sums up my findings over the last two years. The following identifies where the educated community in the UK needs to re-educate itself.

Earlier, in 1990-1994, I already concluded that Montesquieu’s model of the separation of powers of the Trias Politica has failed in a key aspect since its conception in 1748. Democracies need the fourth power of an Economic Supreme Court, see (2014). It is necessary to mention this earlier conclusion because it predates Brexit, but let us now continue with findings following Brexit.

To start with: What does the UK electorate really want w.r.t. Brexit or Bremain ? Neither the Referendum of 2016 nor the General Election of 2017 provides adequate information. One would think that it is rather damning for a claimed democracy when its procedures do not result in adequate clarity on such a fundamental issue.

The 2016 Referendum Question concerned the legal issue of Leave or Remain but was disinformative about the ways of Leaving or Remaining. The political parties that are elected into the House of Commons are split on both direction and ways as well. The overall situation can only be described as chaotic. One might try to characterise this more positively by saying that a population with divided views generated a House of Commons with divided views, which would be democracy itself, but this neglects that there is no information about what those divided views actually are. The true process is “garbage in, garbage out”, and this doesn’t fit the definition of democracy.

The very Brexit or Bremain Referendum Question fails the criteria for a decent statistical enquiry. I am surprised that the Royal Statistical Society (RSS) did not protest. The question of Leave or Remain is a binary legal issue but the true issue concerns the policy options. It took some time to analyse this, but with the help of Anthony Wells of YouGov.com I managed to dissect this, see (2017abc). Some 17 per cent of voters ranked Remain between different versions of Leave, which implies a grand game of guessing what to vote for, and which means that the Referendum failed in its purpose of expressing preferences. The UK Electoral Commission missed this, does not care, and is happy to take the legal position. It claims to provide proper information to the general public, but what it regards as “information” is regarded by statistical science as disinformation (and the RSS is silent on this). One is reminded of Byzantium instead of claimed modernity.

The main question is why the UK had the referendum in the first place. In Holland there has been a system of equal proportional representation (EPR) for the House of Commons since 1917, so that referenda are not required. The UK has a system of district representation (DR) that lacks such proportionality, and that invites the confusion that referenda might be used to find out what the electorate really thinks. The latter is a confusion indeed, since it neglects the important role of bargaining, see (2017c).

This diagnosis set me on the course of investigating why the USA, UK and France have DR and not EPR. My original thought was that a party that won an election would have no reason to change a system that caused its election. This would explain why the USA, UK and France were stuck with DR and did not switch to EPR. Last year I discovered that the true cause is different. My finding for the UK is that there is an amazing blindness in the UK intellectual community. The report in (2018a) causes a chill down the spine. It appears that “political science on electoral systems” is no science yet, but still solidly within the Humanities, akin to astrology, alchemy and homeopathy. The eye-opener is that these academics use the same word “election” for both DR and EPR while these actually have entirely different meanings. In reality only EPR has proper elections befitting a proper democracy. The DR system is a proto-democracy that relies on contests. Political “science” is blind to what this means, not only for proper scientific analysis but also for communication with the general public. Voters are disinformed on a grand scale, both in the textbooks in government classes and in public discussion, e.g. at “election” nights.

Compare physics, which also borrowed words from colloquial English, like “force” and “mass”. Yet in physics these words have received precise meaning. In physics, gravity in Holland has the same meaning as gravity in the UK. Political “science” uses colloquial terms like “election” and “democracy” but those meanings are not fixed. An “election” in Holland with EPR is entirely different from an “election” in the UK with DR. Political “science” thus uses terms that confuse both the academics and the public. When historians describe how the West developed into democracy, they occlude the fact that the USA, UK and France are still in a proto-democratic phase.

A first complication is: There appears to be a special role for the UK Electoral Reform Society (ERS), founded in 1884 and originally known as the Proportional Representation Society. Here we find an independent and disinterested group that criticises DR and that claims to further the UK on the historical path towards EPR. However, it appears that the ERS wants a transferable vote, while its claim that transferability generates proportionality is simply false. Such distortion contributed to the debacle of the 2011 Referendum on the “alternative vote”, which is a counterproductive construct to start with. When one presents the ERS with this criticism, the reply appears to be disingenuous. Instead of a clear adoption of EPR, either in the Dutch version or as in the UK elections for the EU Parliament, both with a wealth of experience from actual application, one can only conclude that the ERS is addicted to this notion of a transferable vote, and wants this model at any cost. Psychology might explain how such zealotry arises, but it remains far removed from proper information for the general public.

A second complication is: There appears to exist a confusion w.r.t. the interpretation of Arrow’s Impossibility Theorem on democracy. In this, there is a major role for mathematicians who mainly look at models and who neglect empirical science. This leads too far for this memo, and an overview is given in (2018e).

A third complication is: There is the interference by a grand coalition of statistics and political science (with some ambiguity whether quotation marks should be used) in creating a black hole on democracy and its measurement, see (2018bcd). Political science never managed to find a good measure for the difference between vote shares and seat shares. My proposal is to use the “sine-diagonal inequality / disproportionality” (SDID) measure, which does for democracy what the Richter scale does for earthquakes. Political science has shown too little understanding of statistics, or perhaps failed in finding such a measure because statistical science did not develop this theory or did not understand what the political scientists were looking for. This hole has been plugged now, see (2018bcd). Nevertheless, this diagnosis calls for a reorganisation of university courses in statistics and political science.

The enclosed graph highlights the “perfect storm” of blindness of the intellectual community that lurks behind Brexit. The figure is documented in (2018d). The main idea is that statistics and other sciences like physics, biology, psychometrics and econometrics could help “political science on electoral systems” to become a proper science. Then science can provide adequate information to the general public.

A conclusion is: The UK electoral system has “winner take all” district representation (DR) that does not provide for equal proportional representation (EPR) of what voters want. Again the word “representation” means something else for proto-democratic DR versus democratic EPR. My suggestion is that the UK switches to EPR, say adopts the Dutch system of open lists, holds new elections, and lets the new House discuss Brexit or Bregret again. Bregret is defined by the fact that the House adopted Brexit before and thus might reconsider. It is not unlikely that the EU would allow the UK the time for such a fundamental reconsideration of both electoral system and Brexit.

It remains to be seen whether the UK electorate would want to stick to the current system of DR or rather switch to EPR. The first step is to provide the UK electorate with adequate information. For this, the UK intellectual community must get its act together on what this information would be. A suggestion is to check the analysis that I have provided here.

 

References

Colignatus (2014), “An economic supreme court”, RES Newsletter issue 167, October, pp. 20-21
Colignatus (2017a), “Voting theory and the Brexit referendum question”, RES Newsletter, Issue 177, April, pp. 14-16
Colignatus (2017b), “Great Britain’s June 2017 preferences on Brexit options”, RES Newsletter, Issue 177, October, http://www.res.org.uk/view/art2Oct17Features.html
Colignatus (2017c), “Dealing with Denial: Cause and Cure of Brexit”, https://boycottholland.wordpress.com/2017/12/01/dealing-with-denial-cause-and-cure-of-brexit/
Colignatus (2018a), “One woman, one vote. Though not in the USA, UK and France”, https://mpra.ub.uni-muenchen.de/84482/
Colignatus (2018b), “Comparing votes and seats with cosine, sine and sign, with attention for the slope and enhanced sensitivity to inequality / disproportionality”, https://mpra.ub.uni-muenchen.de/84469/
Colignatus (2018c), “An overview of the elementary statistics of correlation, R-Squared, cosine, sine, Xur, Yur, and regression through the origin, with application to votes and seats for parliament”, https://doi.org/10.5281/zenodo.1227328
Colignatus (2018d), “An overview of the elementary statistics of correlation, R-Squared, cosine, sine, Xur, Yur, and regression through the origin, with application to votes and seats for parliament (sheets)”, Presentation at the annual meeting of Dutch and Flemish political science, Leiden June 7-8, https://zenodo.org/record/1270381
Colignatus (2018e), “The solution to Arrow’s difficulty in social choice (sheets)”, Second presentation at the annual meeting of Dutch and Flemish political science, Leiden June 7-8, https://zenodo.org/record/1269392


The US National Governors Association (NGA Center for Best Practices) and the Council of Chief State School Officers (CCSSO) are the makers of the US Common Core State Standards (CCSS).

The CCSS refer to the Trends in International Mathematics and Science Study (TIMSS) (wikipedia).

TIMSS is made by the International Association for the Evaluation of Educational Achievement (IEA) (wikipedia). It so happens that IEA has its headquarters in Amsterdam but the link to Holland is only historical.

I am wondering whether CCSS and TIMSS adequately deal with the redesign of mathematics education.

There are conditions under which TIMSS is invalid.

There are conditions under which TIMSS is incomplete.

See my letter to IEA (makers of TIMSS) and NGA Center and CCSSO (users of TIMSS, makers of CCSS).

Mathematics concerns patterns and can involve anything, so that we need flexibility in our tools when we do or use mathematics. At the dawn of mankind we used stories. When writing was invented we used pen and paper. It is a revolution for mankind, comparable to the invention of the wheel and the alphabet, that we now can do mathematics using a computer. Many people focus on the computer and would say that it is a computer revolution, but computers might also generate chaos, which shows that the true relevance comes from structured use.

I regard mathematics by computer as a two-sided coin, which involves both human thought (supported by tools) and what technically happens within a computer. The computer language (software) is the interface between the human mind and the hardware with its flow of electrons, photons or whatever (I am no physicist). We might hold that thought is more fundamental, but this is of little consequence, since we still need the consistency that 1+1 = 2 in mathematics is also 1+1 = 2 in the computer, properly interfaced by a language in which 1+1 = 2 holds too. The clearest expression of mathematics by computer is in “computer algebra” languages, which understand what this revolution for mankind is about, and which were developed for the explicit support of doing mathematics by computer.

The makers of Mathematica (WRI) might be conceptually moving to regarding computation itself as a more fundamental notion than mathematics or the recognition and handling of patterns. Perhaps in their view there would be no such two-sided coin. The brain might be just computation, the computer would obviously be computation, and the language is only a translator of such computations. The idea that we are mainly interested in the structured products of the brain could be less relevant.

Stephen Wolfram is a physicist by origin, and the name “Mathematica” comes from Newton’s book and not from “mathematics” itself, though Newton made that reference. Stephen Wolfram obviously has a long involvement with cellular automata, culminating in his New Kind of Science. Wolfram (2013) distinguishes Mathematica as a computer program from the language that the program uses and is partially written in. Eventually he settled for the term “Wolfram Language” for the computer language that he and WRI use, like “English” is the language used by the people in England (codified by their committees on the use of the English language).

My inclination however was to regard “Mathematica” primarily as the name of the language that happened to be evaluated by the program of the same name. I compared Mathematica to Algol and Fortran. I found the title of Wolfram’s 1991 & 1998 Addison-Wesley book, “Mathematica. A system for doing mathematics by computer”, quite apt. Obviously the system consists of the language and the software that runs it, but the latter might be provided by other providers too, like Fortran has different compilers. Every programmer knows that the devil is in the details, and that a language documentation on paper might not give the full details of actually running the software. Thus when there are no other software providers, it is only accurate to state that the present definition of the language is given precisely by the one program that runs it. This is only practical and not fundamental. In this situation there is no conflict in thinking of “Mathematica as the language of Mathematica”. Thus in my view there is no need to find a new name for the language. I thought that I was using a language but apparently in Wolfram’s recent view the emphasis was on the computer program. I didn’t read Wolfram’s blog in 2013, and otherwise I might have given this feedback.

Wolfram (2017) and (2018) use the terms "computational essay" and "computational thinking", where the latter apparently is intended to mean something like (my interpretation): programming in the Wolfram Language, using internet resources, e.g. the cloud, and not necessarily the stand-alone version of Mathematica or now also Wolfram Desktop. My impression is that Wolfram indeed emphasizes computation, and that he perhaps also wants to get rid of a popular confusion of the name "Mathematica" with mathematics only. Apparently he doesn't want to get rid of that name altogether, likely given his involvement in its history and also its fine reputation.

A related website is https://www.computerbasedmath.org (CBM) by Conrad Wolfram. Most likely Conrad adopts Stephen's view on computation. It might also be that CBM finds the name "Mathematica" disinformative, as educators (i) may be unaware of what this language and program is, (ii) may associate mathematics with pen and paper, and (iii) would however pay attention to the word "computer". Perhaps CBM also thinks: you better adopt the language of your audience than teach them to understand your terminology on the history of Mathematica.

I am not convinced by these recent developments. I still think: (1) that this is a two-sided coin (but I am no physicist and do not know about electrons and such), (2) that it is advantageous to clarify to the world: (2a) that mathematics can be used for everything, and (2b) that doing mathematics by computer is a revolution for mankind, and (3) that one should beware of people without didactic training who want to ship computer technology into the classroom. My suggestion to Stephen Wolfram remains, as stated before in (2009, 2015a), that he turns WRI into a public utility like those that exist in Holland – while it already has many characteristics of this. It is curious to see the open source initiatives that apparently will not use the language of Mathematica, now by WRI (also) called the Wolfram Language, most likely because of copyright fears even while it is good mathematics.

Apparently there are legal concerns (but I am no lawyer) that issues like 1+1 = 2 or [Pi] are not under copyright, but that choices for software can be. For example, the use of h[x] with square brackets rather than parentheses h(x) might be presented to the copyright courts as a copyright issue. This is awkward, because it is good didactics of mathematics to use the square brackets. Not only computers but also kids may get confused by expressions a(2 + b) and f(x + h) - f(x). Let me refer to my suggestion that each nation sets up its own National Center for Mathematics Education. Presently we have a jungle that is no good for WRI, no good for the open source movement (e.g. R or https://www.python.org or http://jupyter.org), and especially no good for the students. Everyone will be served by clear distinctions between (i) what is in the common domain for mathematics and education of mathematics (the language) and (ii) what would be subject to private property laws (programs in that language, interpreters and compilers for the language) (though such could also be placed into the common domain).
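
To illustrate this bracket issue, a small hypothetical snippet in Mathematica (not taken from said suggestion): square brackets make function application unambiguous, while parentheses remain reserved for grouping and multiplication.

f[x_] := x^2     (* square brackets denote function application *)
f[2 + 1]         (* gives 9: the function applied to 3 *)
f (2 + 1)        (* gives 3 f: parentheses only group, and juxtaposition means multiplication *)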

PM. This text has been included in the May 23 2018 article on Arithmetic with H = -1. Earlier related blogs are here and here.

Colignatus, Th. (2009, 2015a), Elegance with Substance, (1) website  (2) PDF on Zenodo

Wolfram, S. (1991, 1998), Mathematica. A system for doing mathematics by computer, 2nd edition, Addison-Wesley

Wolfram, S. (2013), What Should We Call the Language of Mathematica?, weblog

Wolfram, S. (2017), What Is a Computational Essay?, weblog

Wolfram, S. (2018), Launching the Wolfram Challenges Site, weblog

For our understanding of history we like to distinguish between structural developments and contingencies.

Examples of structure would be the rise of the world population and Jared Diamond’s Guns, Germs, and Steel. Obviously, various authors have various suggestions for what they consider to be structure, but the lack of consensus generally doesn’t matter as long as the discussion continues, and as long as people are aware that there are different points of view. It is rather tricky to identify structure for the here and now because it might require the perspective of some centuries to arrive at proper evaluation.

There are also major contingent events that shaped developments. The collapse of civilisation in 1177 BC would be a perfect storm. Caesar might not have crossed the Rubicon. His “alea iacta est” indicates that he took a calculated risk, and the outcome might have been different. If the weather had been better, then perhaps the Armada would have conquered England and saved the world for Catholicism.

Thus we distinguish structure and relevant and irrelevant contingency.

Brexit came with such surprise that we are still discussing how it could have happened. It very much looks like a perfect storm. The 2016 referendum result has many curious aspects. The referendum question itself doesn’t fit the requirements of a scientifically warranted statistical questionnaire – and the British Electoral Commission doesn’t mind. Even in 2017, 17% of UK voters put Remain between different options for Leave, and those of them who voted Leave in 2016 might not have voted so if their preferred option would not materialise (see here). Hannes Grassegger & Mikael Krogerus point to media manipulation. Referenda are instruments of populism, and the better model of democracy is representative democracy. Chris Patten rightly remarks that the UK House of Commons had more options than Theresa May suggests:

“The Brexit referendum last June was itself a disaster. A parliamentary democracy should never turn to such populist devices. Even so, May could have reacted to the 52 per cent vote to quit Europe by saying that she would hand the negotiations to a group of ministers who believed in this outcome and then put the result of the talks in due course to parliament and the people. Instead, she turned the whole of her government into a Brexit machine, even though she had always wished to remain in the EU. Her government’s motto is now “Brexit or bust.” Sadly, we will probably get both.”

Structural cause of Brexit

My take on the structural cause of Brexit is clarified by the following table. We distinguish Euro and Non-Euro countries versus the political structures of district representation (DR) and equal or proportional representation (EPR).

              District representation (DR)    Equal or proportional representation (EPR)
Euro          France                          Holland (natural quota), Germany (threshold 5%)
Non-Euro      UK (Brexit)                     Sweden (threshold 4%), Norway (non-EU, threshold 4%)

Update 2018-02-27: On the distinction between DR and EPR, there are: (1) this short overview of elementary statistics with an application to votes and seats, (2) a deconstruction of the disarray in the “political science on electoral systems” (1W1V), and (3) details on the suggestion for an inequality or disproportionality measure (SDID).

In the special Brexit edition of BJPIR, Helen Thompson discusses inevitability and contingency, and concludes that the position of the UK as a non-Euro country in a predominantly Eurozone EU became politically untenable.

  • For the voters in the UK, migration was a major issue. The world financial crisis of 2007+ and the contractionary policies of the Eurozone turned the UK into a “job provider of last resort”.
  • For the political elite, the spectre of the Euro loomed large. Given the theory of the optimal currency area, the Eurozone must further integrate or break up. The UK didn’t want to join the Euro and thus found itself at the fringe of the EU on an increasing number of issues. With the increasing loss of power and influence on developments, more and more politicians saw less and less reason to participate.

Thompson regards the economic angle as a sufficient structural cause. My take is that it is only necessary, and that another necessary element is the form of parliamentary representation. In my recent paper One woman, one vote. Though not in the USA, UK and France, with the focus on this parliamentary dimension, I put forward the diagnosis that the UK political system is the main cause. Brexit is not proof of a successful UK political system but proof of its failure.

  • The UK has district representation (DR). UKIP got 12.5% of the votes but only 1 seat in a house of 650 seats. David Cameron saw that crucial seats of his Conservatives were being challenged by UKIP. Such a threat may be amplified under DR. This explains Cameron’s political ploy to call a referendum.
  • If the UK had had equal or proportional representation (EPR), the UKIP protest vote could have been contained, and the UK would have had more scope to follow the example of Sweden (rather than Norway). Obviously, the elephant in the room of the optimal currency area for the Euro would not be resolved by this, but there would have been more time to find solutions. For example, the UK would have had a stronger position to criticise the wage moderation policies in Germany and Holland.

The structural cause of disinformation about representation

The 2007+ financial crisis highlighted irresponsible herd behaviour in economic science. Brexit highlights irresponsible herd behaviour in political science. Said paper One woman, one vote. Though not in the USA, UK and France (1W1V) shows that political science on electoral systems (on that topic specifically) is still pre-science, comparable to homeopathy, astrology and alchemy. Thus the UK finds itself in the dismal spot of being disinformed about democracy for decades.

The paper runs through the nooks and crannies of confusion and bias. At various points I was surprised by the subtleties of the particular madness. The paper is rather long but this has a clear explanation. When an argument has 100 aspects, and people understand 99% correctly and 1% wrongly, but everyone a different 1%, in continuous fashion, then you really need the full picture if you want everyone to understand it.

But let me alert you to some points.

(1) The paper focuses on Carey & Hix (2011) on an “electoral sweet spot” of 3-8 seats per district. Particular to C&H is that they confuse “most frequent of good” with “the best”. The district magnitude of 3-8 seats appears most frequent in cases that satisfy their criteria for being good, and they turn this into the best. Since such DR would be best, say goodbye to EPR. But it is a confusion.

(2) They use fuzzy words like vote and election. But the words mean different things in DR or EPR. In DR votes are obliterated that EPR translates into seats. Using the same words for different systems, C&H suggest treatment on a par while there are strict logical differences. The Universal Declaration of Human Rights only fits with EPR. Science would use strict distinctions, like “vote in DR” and “vote in EPR”. Political science is still too close to colloquial language, and thus prone to confusion. Obviously I agree that it is difficult to define democracy, and that there are various systems, each with a historical explanation. But science requires clear terms. (See this Varieties of Democracy project, and check that they still have to do a lot too.)

(3) There is a statistical relationship between a measure of disproportionality (EGID) and a measure of the concentrated number of parties (CNP). C&H interpret the first as “interest-representation” and the latter as “accountability”. An interpretation is something other than a model. Using the statistical regularity, they claim to have found a trade-off relation between interest-representation and accountability. Instead, the scientific approach would be to try to explain the statistical regularity for what it is. The suggested interpretation is shaky at best. One cannot use a statistical regularity as an argument on content and political principle (like One woman, one vote).

(4) They present a mantra, and repeat it, that there would be a trade-off between interest-representation and accountability. The best point [confusion] would be achieved at a district magnitude of 3-8 seats per district. However, they do not present a proper model and measure for accountability. My paper presents such a model, and shows that the mantra is false. Not DR but EPR is most accountable. EPR is obviously most interest-representative, so that there is no trade-off. Thus the C&H paper fails in the scientific method of modeling and measuring. It only has the method of repeating tradition and a mantra, with some magic of using interpretations. (Section 3.6 of 1W1V should start opening eyes of political scientists on electoral systems.)

(5) The C&H paper is the top of a line of research in “political science on electoral systems”. This paper fails and thus the whole line fails. Section 4.5 of 1W1V shows confusion and bias in general in political science on electoral systems, and the C&H paper is no exception to this.

The cure of Brexit

The cure of Brexit might well be that it just happens, and that we must learn to live with it. The EU lives with Norway while NATO has its Arctic training there.

Seen from the angle of the cause via the political structure, it may also be suggested that both France and the UK switch from DR to EPR, and that the newly elected UK House of Commons re-evaluates Brexit or Bregret. This switch may well cause the break-up of the parties of the Conservatives and Labour into Remain or Leave parties, but such would be the consequence of democracy and thus be fine by itself. We would no longer have Theresa May who was for Remain leading the Leavers and Jeremy Corbyn who was for Leave leading the Remainers. (For an indication, see here.) The other EU member states tend to stick to the Brexit deadline of March 29 2019, but when they observe the cause for Brexit and a new objective in the UK to deal with this (fateful) cause by switching to EPR, then this deadline might be shifted to allow the UK to make up its mind in a proper way.

Obviously, a UK switch to EPR is advisable in its own right, see said paper. It would also allow the new UK House of Commons to still adopt Brexit. The advantage of such an approach and decision would be that it would have the democratic legitimacy that is lacking now.

The relevant contingency of the Sovereignty Bill

Thompson’s article surprised me by her discussion of the 2010 UK Sovereignty Bill (that calls itself an Act). She calls it a “referendum lock”, and indeed it is. The Bill / Act states:

“2 Treaties. No Minister of the Crown shall sign, ratify or implement any treaty or law, whether by virtue of the prerogative powers of the Crown or under any statutory authority, which — (a) is inconsistent with this Act; or (b) increases the functions of the European Union affecting the United Kingdom without requiring it to be approved in a referendum of the electorate in the United Kingdom.”

The approach is comparable to the one in Ireland, in which EU treaties are subject to referenda too. In Holland, only changes in the constitution are subject to new elections and affirmation by the newly elected parliament, while treaties are exempt from this – and this is how the EU constitution of 2005 got rejected in a referendum but the Lisbon treaty got accepted in Dutch parliament. Currently a state commission is investigating the Dutch parliamentary system.

Thompson explains that the UK referendum lock had the perverse effect that EU leaders started to avoid the instrument of a treaty and started to use other ways to enact policies. For EU-minded Ireland, the instrument of a referendum was acceptable but for EU-skeptic UK the instrument was a poison pill. Why put much effort in negotiating a treaty if it could be rejected by the UK circus (partly created by its system of DR) ?

Thompson explains that while the referendum lock had been intended to enhance the UK position as a non-euro country w.r.t. the eurozone, in effect it weakened Cameron’s position. The world noticed this, and this weak position was fuel for the Brexiteers.

The relevant contingency of Thatcher’s policies

Brexit was mostly caused within the UK itself. Thompson doesn’t call attention to these relevant contingencies:

  • Margaret Thatcher started as pro-EU and even partook in the abolition of unanimity and the switch to qualified majority rule. My view is that it would have been wiser to stick to unanimity and be smarter in handling different speeds.
  • Secondly, Thatcher supported the neoliberal approach in economics that contributed to austerity and the deterioration of British industry that British voters blame the EU for. There was an obvious need for redress of earlier vulgar-Keynesian errors but there is no need to overdo it. My advice to the UK is to adopt EPR and see what can be learned from Holland and Sweden.
  • Thompson refers to her own 1996 book on the UK and ERM but doesn’t mention Bernard Connolly, his text The rotten heart of Europe and his dismissal from the EU Commission in 1995. At that time John Major had become prime minister and he did not defend Connolly’s position at the EU Commission. A country that is so easy on civil rights and free speech deserves the state that the UK is in. Surely the EU courts allowed the dismissal, but this only means that one should look for better employment safeguards for critical views. Whoever wants to combine independent scientific advice and policy making arrives at the notion of an Economic Supreme Court, see below.

The relevant contingency of migration

I am reminded of the year 1988 at the Dutch Central Planning Bureau (CPB) when we looked at the Cecchini report. One criticism was that the report was too optimistic about productivity growth and less realistic on the costs of displaced workers. An observation by myself, though not further developed, was that, with more job mobility, people might prefer a single language barrier to a double one. People from the UK might move easier to Northern European countries that speak English well. People from the rest of Europe who have learned some English might prefer to go to the UK, to avoid having to deal with two other languages. I don’t know much about migration and I haven’t checked whether the UK has a higher share of it or not, and whether this language effect really matters. Given the role in the discussion it obviously would be a relevant contingency. Perhaps the UK and Ireland might claim a special position because of the language effect, and this might encourage other countries to switch to English too. But I haven’t looked into this.

The other elephant in the room

The other elephant in the room is my own analysis in political economy. It provides an amendment to Thompson’s analysis.

  • DRGTPE provides for a resolution of the Great Stagflation that we are in.
  • CSBH provides a supplement for the 2007+ crisis situation.
  • The paper Money as gold versus money as water (MGMW) provides an amendment to the theory of the optimal currency area: when each nation has its own Economic Supreme Court then countries might achieve the kind of co-ordination that is required. This is still a hypothesis but the EU has the option of integration, break up, or try such rational hypotheses. (The Van Rompuy roadmap might speed up integration too much with risk of a break-up.)

The main idea in DRGTPE was available in 1990 with the collection of background papers in 1992 (published by Guido den Broeder of Magnana Mu). Thus the EU might have had a different approach to EMU. The later edition of DRGTPE contains a warning about financial risk that materialised in 2007+. CSBH and MGMW provide a solution approach for the current problems.

If the EU would adopt such policies then there would be much less migration, since people would tend to prefer to remain at home (which is why I regard migration as a secondary issue and less in need of studying).

If the EU and UK would adopt such policies then there might still be Brexit or Bregret. Thus UK politicians might still end up preferring what they are now trying to find out that they prefer.

Conclusion

My impression is that the above gives a clear structural explanation for the UK decision for Brexit and an indication of what contingent events were relevant. Knowing what caused it helps to identify a cure. It is remarkable how large the role of denial in all of this is. Perhaps this story about the polar bear provides a way to deal with this huge denial (as polar elephants a.k.a. mammoths are already extinct).

Karl Pearson (1857-1936) is one of the founders of modern statistics, see this discussion by Stephen Stigler 2008 (and see Stigler’s The Seven Pillars of Statistical Wisdom 2016).

I now want to focus on Pearson’s 1897 paper Mathematical Contributions to the Theory of Evolution.–On a Form of Spurious Correlation Which May Arise When Indices Are Used in the Measurement of Organs.

The main theme is that if you use the wrong model then the correlations for that model will be spurious compared to the true model. Thus Pearson goes to great lengths to create a wrong model, and compares this to what he claims is the true model. It might be though that he still didn’t develop the true model. Apart from this complexity, it is only admirable that he points to the notion of such spurious correlation in itself.

One example in Pearson’s paper is the measurement of skulls in Bavaria (p495). The issue concerns compositional data, i.e. data vectors that add up to a given total, say 100%. The former entry on this weblog presented the inequality / disproportionality measure SDID for votes and seats. These become compositional data when we divide them by their sum totals, so that we compare 100% of the votes with 100% of the seats.

Pearson’s analysis got a sequel in the Aitchison geometry, see this historical exposition by Vera Pawlowsky-Glahn and Juan José Egozcue, The closure problem: one hundred years of debate. Early on, I was and still am a fan of the Aitchison & Brown book on the lognormal distribution, but I have my doubts about the need for this particular geometry for compositional data. In itself the Aitchison geometry is a contribution, with a vector space, norm and inner product. When we transform the data to logarithms, multiplication becomes addition and powers become scalars, so that we can imagine such a vector space; yet the amazing finding is that rebasing to 1 or 100% can be maintained. It is called “closure” when a vector is rebased to a constant sum. What, however, is the added value of using this geometry ?
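
To make this closure operation concrete, here is a minimal sketch in Mathematica with hypothetical composition vectors; the names closure and perturb are mine and not Aitchison’s notation.

closure[x_] := x / Total[x]            (* rebase a positive vector to sum 1 *)
perturb[x_, y_] := closure[x y]        (* Aitchison perturbation: componentwise product, then closure *)
closure[{2, 3, 5}]                     (* {1/5, 3/10, 1/2} *)
perturb[{0.2, 0.3, 0.5}, {2, 1, 1}]    (* about {0.333, 0.25, 0.417}: still sums to 1 *)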

It may well be that different fields of application still remain different on content, so that when they generate compositional data, then these data are only similar in form, while we should be careful in using the same techniques only because of that similar form. We must also distinguish:

  • Problems for compositional data that can be handled by both Sine / Cosine and the Aitchison geometry, but for which Sine and Cosine are simpler.
  • Problems for compositional data that can only be handled by the Aitchison geometry.

An example of the latter might be the paper by Javier Palarea-Albaladejo, Josep Antoni Martín-Fernández and Jesús A. Soto (2012) in which they compare the compositions of milk of different mammals. I find this difficult to judge on content since I am no biologist. See the addendum below on the distance function.

In a fine overview by sheets, Pawlowsky-Glahn, Egozcue & Meziat 2007 present the following example, adapted from Aitchison. They compare two sets of soil samples, of which one sample is contaminated by water. If you want to spot the problem with this analysis yourself, give it a try, and otherwise read on.

When the water content in the sample of A is dropped, then the test scores are rebased to the total of 100% for B again. E.g. for the 60% water in sample 1, this becomes:

{0.1, 0.2, 0.1} / (0.1 + 0.2 + 0.1) = {0.25, 0.5, 0.25}

PM. A more complex example with simulation data is by David Lovell.

Reproduction of this example

It is useful to first reproduce the example so that we can later adapt it.

In Wolfram Alpha, we can reproduce the outcome as follows.

For A, the input code is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; Correlation[mat1] // Chop // MatrixForm.

For B, the input code is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; droplast[x_List?VectorQ] := Module[{a}, a = Drop[x, -1]; a / (Plus @@ a)]; mat2 = droplast /@ mat1; Correlation[mat2] // Chop // MatrixForm.

The confusion about the correlation

In the former weblog entry, we had SDID[v, s] for the votes v and seats s. In this way of thinking, we would reason differently. We would compare (correlate) rows and not columns.

There is also a difference that correlation uses centered data while Sine and Cosine use original or non-centered data. Perhaps this contributed to Pearson’s view.

One possibility is that we compare sample 1 according to A with sample 1 according to B, as SDID[1A*, 1B]. Since the measures of A also contain water, we must drop the water content and create A*. The assumption is that A and B are independent measurements, and that we want to see whether they generate the same result. When the measurements are not affected by the content of water, then we would find zero inequality / disproportionality. However, Pawlowsky et al. do not state the problem as such.

The other possibility is that we would compare SDID[sample i, sample j].

Instead of using SDID for inequality / disproportionality, let us now use the cosine as a measure for similarity.

For A, the input code is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; cos[x__] := 1 - CosineDistance[x]; Outer[cos, mat1, mat1, 1] // Chop // MatrixForm.

Since the water content is not the same in all samples, above scores will be off. To see whether these similarities are sensitive to the contamination by the water content, we look at the samples according to B.

The input code for Wolfram Alpha is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; cos[x__] := 1 - CosineDistance[x]; droplast[x_List?VectorQ] := Module[{a}, a = Drop[x, -1]; a / (Plus @@ a)]; mat2 = droplast /@ mat1; Outer[cos, mat2, mat2, 1] // Chop // MatrixForm.

Since the water content differed so much per sample, and apparently is not considered to be relevant for the shares of the other components, the latter matrix of similarities is most relevant.

If we know that the samples are from the same soil, then this would give an indication of sample variability. Conversely, we might have information about the dispersion of samples, and perhaps we might determine whether the samples are from the same soil.

Obviously, one must have studied soil samples to say something on content. The above is only a mathematical exercise. This only highlights the non-transposed case (rows) versus the transposed case (columns).

Evaluation

Reading the Pearson 1897 paper shows that he indeed looks at the issue from the angle of the columns, and that he considers calibration of measurements by switching to relative data. He gives various examples, but let me show the case of skull measurement, which may still be a challenge.

Pearson presents two correlation coefficients for B / L with H / L: one based upon the standard definition (which allows for correlations between the levels), and one baptised "spurious", based upon the assumption of independent distributions (and thus zero correlations for the levels). Subsequently he throws doubt on the standard correlation because of the high value of the spurious correlation.
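
A minimal simulation sketch in Mathematica may clarify Pearson’s point; the numbers are hypothetical, with three independent series standing in for breadth (B), height (H) and length (L), yet the ratios with the common denominator correlate substantially.

SeedRandom[1];
n = 1000;
b = RandomReal[{8, 12}, n];    (* independent "breadth" values *)
h = RandomReal[{8, 12}, n];    (* independent "height" values *)
l = RandomReal[{8, 12}, n];    (* independent "length" values *)
Correlation[b/l, h/l]          (* around 0.5: spurious correlation from the shared denominator *)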

One must be a biologist or even a skull-specialist to determine whether this is a useful approach. If the true model would use relative data with zero correlations, what is the value of the assumptions of zero or nonzero correlations for the absolute values ? What is useful depends upon the research question too. We can calculate all kinds of statistics, but what decision is intended ?

It is undoubtedly a contribution by Pearson that looking at phenomena in this manner can generate what he calls “spurious correlation”. Whatever the model, it is an insight that using the wrong model can create spurious correlation and a false sense of achievement. I would feel more comfortable though when Pearson had also mentioned the non-transposed case, which I would tend to regard as the proper model, i.e. comparing skulls rather than correlating categories on skulls. Yet he doesn’t mention it.

Apparently the Aitchison geometry provides a solution to Pearson’s approach, thus still looking at transposed (column) data. This causes the same discomfort.

Pro memoria

The above uses soil and skulls, which are not my expertise. I am more comfortable with votes and seats, or budget shares in economics (e.g. in the Somermeyer model or the indirect addilog demand system, Barten, De Boer).

Conclusion

Pearson was not confused on what he defined as spurious correlation. He might have been confused about the proper way to deal with compositional data, namely looking at columns rather than rows. This however also depends upon the field of interest and the research question. Perhaps a historian can determine whether Pearson also looked at compositional data from rows rather than columns.

Addendum November 23 2017

For geological data, Watson & Philip (1989) already discussed the angular distance. Martin-Fernandez, Barcelo-Vidal, Pawlowsky-Glahn (2000), “Measures of differences for compositional data and hierarchical clustering methods”, discuss distance measures. They also mention the angle between two vectors, found via arccos[cos[v, s]], for votes v and seats s. It is mentioned in the 2nd row of their Table 1. The vectors can also be normalised to the unit simplex as w = v / Sum[v] and z = s / Sum[s], though cos is insensitive to this, with cos[w, z] = cos[v, s].
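
A minimal sketch in Mathematica of this angular distance and of the insensitivity of the cosine to normalisation onto the unit simplex; the vectors and function names are mine, for illustration only.

cosim[v_, s_] := v.s / (Norm[v] Norm[s])           (* cosine of the angle between two vectors *)
angle[v_, s_] := ArcCos[cosim[v, s]]               (* angular distance, in radians *)
v = {50., 30., 20.}; s = {55., 30., 15.};          (* hypothetical votes and seats *)
angle[v, s]                                        (* about 0.103 radians *)
Chop[cosim[v/Total[v], s/Total[s]] - cosim[v, s]]  (* 0: rebasing to the unit simplex leaves the cosine unchanged *)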

In sum, the angular distance, or the use of the sine as a distance measure and the cosine as a similarity measure, satisfy the Aitchison criteria of invariance to scale and permutation, but do not satisfy subcompositional dominance and invariance to translation (perturbation).

This discussion makes me wonder whether there are still key differences between compositional data in terms of concepts. The compositional form should not distract us from the content. For a Euclidean norm, a translation leaves a distance unaffected, as Norm[x - y] = Norm[(x + t) - (y + t)]. This property can be copied for logratio data. However, for votes and seats, it is not clear why a (per party different) percentage change vector should leave the distance unaffected (as happens in logratio distance).

An election only gives votes and seats. Thus there is no larger matrix of data. Comparison with other times and nations has limited meaning. Thus there may be no need for the full Aitchison geometry.

At this moment, I can only conclude that Sine (distance) and Cosine (similarity) are better for votes and seats than what political scientists have been using till now. It remains to be seen for votes and seats whether the logratio approach would be better than the angular distance and the use of Sine and Cosine.

The following applies to elections for Parliament, say for the US House of Representatives or the UK House of Commons, and it may also apply for the election of a city council. When the principle is one man, one vote then we would want that the shares of “seats won” would be equal to the shares of “votes received”. When there are differences then we would call this inequality or disproportionality.

Such imbalance is not uncommon. At the US election of November 8 2016, the Republicans got 49.1% of the votes and 55.4% of the seats, while the Democrats got 48% of the votes and 44.6% of the seats. At the UK general election of June 8 2017, the Conservatives got 42.2% of the votes and 48.8% of the seats while Labour got 39.9% of the votes and 40.3% of the seats (the wikipedia data of October 16 2017 are inaccurate).

This article clarifies a new and better way to measure this inequality or disproportionality of votes and seats. The new measure is called Sine-Diagonal Inequality / Disproportionality (SDID) (weblink to main article). The new measure falls under descriptive statistics. Potentially it might be used in any area where one matches shares or proportions, like the proportions of minerals in different samples. SDID is related to statistical concepts like R-squared and the regression slope. This article looks at some history, as Karl Pearson (1857-1936) created the R-Squared and Ronald A. Fisher (1890-1962) in 1915 determined its sample distribution. The new measure would also be relevant for Big Data. William Gosset (1876-1937) a.k.a. “Student” was famously unimpressed by Fisher’s notion of “statistical significance” and now is vindicated by descriptive statistics and Big Data.

The statistical triad

Statistics has the triad of Design, Description and Decision.

  • Design is especially relevant for the experimental sciences, in which plants, lab rats or psychology students are subjected to alternate treatments. Design is informative but less applicable for observational sciences, like macro-economics and national elections when the researcher cannot experiment with nations.
  • Descriptive statistics has measures for the center of location – like mean or median – and measures of dispersion – like range or standard deviation. Important are also the graphical methods like the histogram or the frequency polygon.
  • Statistical decision making involves the formulation of hypotheses and the use of loss functions to evaluate those hypotheses. A hypothesis on the distribution of the population provides an indication for choosing the sample size. A typical example is the definition of a decision error (of the first kind): a hypothesis is true but still rejected. One might accept a decision error in say 5% of the cases, called the level of statistical significance.

Historically, statisticians have been working on all these areas of design, description and decision, but the most difficult was the formulation of decision methods, since this involved both the calculus of reasoning and the more complex mathematics of the normal, t, chi-square, and other frequency distributions. In practical work, the divide between the experimental and the non-experimental (observational) sciences appeared insurmountable. The experimental sciences have the advantages of design and decisions based upon samples, and the observational sciences basically rely on descriptive statistics. When the observational sciences do regressions, there is an ephemeral application of statistical significance that invokes the central limit theorem, by which aggregated errors approximate the normal distribution.

This traditional setup of statistics has been challenged in recent decades by Big Data – see also this discussion by Rand Wilcox in Significance May 2017. When all data are available, and when you actually have the population data, then the idea of using a sample evaporates, and you don’t need to develop hypotheses on the distributions anymore. In that case descriptive statistics becomes the most important aspect of statistics. For statistics as a whole, the emphasis shifts from statistical decision making to decisions on content. While descriptive statistics had been applied mostly to samples, Big Data now raises the additional question how these descriptions relate to decisions on content. In fact, such questions already existed for the observational sciences like macro-economics and national elections, in which the researcher only had descriptive statistics, and lacked the opportunity to experiment and base decisions upon samples. The disadvantaged areas now provide insights for the earlier advantaged areas of research.

The key insight is to transform the loss function into a descriptive statistic itself. An example is the Richter scale for the magnitude of earthquakes. It is both a descriptive statistic and a factor in the loss function. A nation or regional community has on the one hand the cost of building and construction and on the other hand the risk of losing the entire investments and human lives. In the evaluation of cost and benefit, the descriptive statistic helps to clarify the content of the issue itself. The key issue is no longer a decision within statistical hypothesis testing, but the adequate description of the data so that we arrive at a better cost-benefit analysis.

Existing measures on votes versus seats

Let us return to the election for the House of Representatives (USA) or the House of Commons (UK). The criterion of One man, one vote translates into the criterion that the shares of seats equal the shares of votes. We are comparing two vectors here.

The reason why the shares of seats and votes do not match is because the USA and UK use a particular setup. The setup is called an “electoral system”, but since it does not satisfy the criterion of One man, one vote, it does not really deserve that name. The USA and UK use both (single member) districts and the criterion of Plurality per district, meaning that the district seat is given to the candidate with the most votes – also called “first past the post” (FPTP). This system made some sense in 1800 when the concern was district representation. However, when candidates stand for parties then the argument for district representation loses relevance. The current setup does not qualify for the word “election” though it curiously continues to be called so. It is true that voters mark ballots but that is not enough for a real election. When you pay for something in a shop then this is an essential part of the process, but you also expect to receive what you ordered. In the “electoral systems” in the USA and UK, this economic logic does not apply. Only votes for the winner elect someone but the other votes are obliterated. For such reasons Holland switched to equal / proportional representation in 1917.

For descriptive statistics, the question is how to measure the deviation between the shares of votes and seats. For statistical decision making we might want to test whether the US and UK election outcomes are statistically significantly different from equality / proportionality. This approach requires not only a proper descriptive measure anyway, but also some assumptions on the distribution of votes, which might be rather dubious to start with. For this reason the emphasis falls on descriptive statistics, and the use of a proper measure for inequality / disproportionality (ID).

A measure proposed by, and named after, Loosemore & Hanby in 1971 (LHID) uses the sum of the absolute deviations of the shares (in percentages), divided by 2 to correct for double counting. The LHID for the UK election of 2017 is 10.5 on a scale of 100, which means that 10.5% of the 650 seats (68 seats) in the UK House of Commons are relocated from what would be an equal allocation. When the UK government claims to have a “mandate from the people” then this is only because the UK “election system” is so rigged that many votes have been obliterated. The LHID gives the percentage of relocated seats but is insensitive to how these actually are relocated, say to a larger or smaller party.

The Euclid / Gallagher measure proposed in 1991 (EGID) uses the Euclidean distance, again corrected for double counting. For an election with only two parties EGID = LHID. The EGID has become something like the standard in political science. For the UK 2017 the EGID is 6.8 on a scale of 100, which cannot be interpreted as a percentage of seats like LHID, but which indicates that the 10.5% of relocated seats are not concentrated in the Conservative party only.
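
For concreteness, here is a minimal sketch in Mathematica of both measures as described above, applied to a hypothetical four-party outcome in percentages; the function names lhid and egid and the numbers are mine, not from the cited papers.

lhid[v_, s_] := Total[Abs[v - s]] / 2              (* Loosemore & Hanby: half the sum of absolute deviations *)
egid[v_, s_] := Norm[v - s] / Sqrt[2]              (* Euclid / Gallagher: Euclidean distance, corrected for double counting *)
v = {42., 40., 12., 6.}; s = {49., 41., 8., 2.};   (* hypothetical vote and seat percentages *)
{lhid[v, s], egid[v, s]}                           (* about {8., 6.4}: EGID is lower because the deviations are spread over parties *)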

Alan Renwick in 2015 tends to see more value in LHID than EGID: “As the fragmentation of the UK party system has increased over recent years, therefore, the standard measure of disproportionality [thus EGID] has, it would appear, increasingly understated the true level of disproportionality.”

The new SDID measure

The new Sine-Diagonal Inequality / Disproportionality (SDID) measure – presented in this paper – looks at the angle between the vectors of the shares of votes and seats.

  • When the vectors overlap, the angle is zero, and then there is perfect equality / proportionality.
  • When the vectors are perpendicular then there is full inequality / disproportionality.
  • While this angle varies from 0 to 90 degrees, it is more useful to transform it into the sine and cosine, which are in the [0, 1] range.
  • The SDID takes the sine for inequality / disproportionality and the cosine of the angle for equality / proportionality.
  • With Sin[0] = 0 and Cos[0] = 1, the sine thus gives 0 at perfect equality / proportionality and 1 at full inequality / disproportionality, while the cosine runs in the opposite direction.

It appears that the sine is more sensitive than either the absolute value (LHID) or the Euclidean distance (EGID). It is closer to the absolute value for small angles, and closer to the Euclidean distance for larger angles. See said paper, Figure 1 on page 10. SDID is something like a compromise between LHID and EGID but also better than both.

The role of the diagonal

When we regress the shares of the seats on the shares of the votes without using a constant – i.e. using Regression Through the Origin (RTO) – then this gives a single regression coefficient. When there is equality / proportionality then this regression coefficient is 1. This has the easy interpretation that this is the diagonal in the votes & seats space. This explains the name of SDID: when the regression coefficient generates the diagonal, then the sine is zero, and there is no inequality / disproportionality.
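
A minimal sketch in Mathematica of this regression through the origin, with hypothetical shares; the function name rto is mine, and the single coefficient is simply v.s / v.v.

rto[v_, s_] := v.s / (v.v)                     (* RTO slope of seat shares on vote shares *)
rto[{0.42, 0.40, 0.18}, {0.42, 0.40, 0.18}]    (* 1: perfect equality / proportionality gives the diagonal *)
rto[{0.42, 0.40, 0.18}, {0.49, 0.41, 0.10}]    (* about 1.05: larger parties get relatively more seats here *)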

Said paper – see page 38 – recovers a key relationship between the sine on the one hand and the Euclidean distance and this regression coefficient on the other. On the diagonal, the sine and the Euclidean distance are both zero. Off the diagonal, the sine differs from the Euclidean distance in a nonlinear manner, by a factor given by the regression coefficient. This relationship determines the effect indicated above, namely how SDID compromises between and improves upon LHID and EGID.

Double interpretation as slope and similarity measure

There appears to be a relationship between said regression coefficient and the cosine itself. This allows a double interpretation as both slope and similarity measure. This weblog text is intended to avoid formulas as much as possible, and thus I refer to said paper for the details. Suffice it to say here that, at first, it may seem a drawback that such a double interpretation is possible, yet on closer inspection the relationship makes sense, and it is an advantage to be able to switch perspective.

Weber – Fechner sensitivity, factor 10, sign

In human psychology there appears to be a distinction between actual differences and perceived differences. This is called the Weber – Fechner law. When a frog is put into a pan with cool water that is slowly brought to the boil, it will not jump out. When a frog is put into a pan with hot water, it will jump out immediately. People may notice differences between low vote shares and high seat shares, but they may be less sensitive to small differences, while these differences can actually still be quite relevant. For this reason the SDID uses a sensitivity transform: it takes the square root of the sine.

(PM. A hypothesis why the USA and UK still call their national “balloting events” “elections” is that the old system of districts changed so gradually into the method of obliterating votes that many people did not notice. It is more likely, though, that some parties recognised the effect but have an advantage under the present system, and therefore do not want to change to equal / proportional representation.)

Subsequently, the sine and its square root have values in the range [0, 1]. In itself this is an advantage, but it comes with leading zeros. We might multiply by 100, but this might cause confusion with percentages, and the second digit might give a false sense of accuracy. It is more useful to multiply by 10. This gives values like on a report card. We can compare with Bart Simpson, who appreciates low values on his report card.

Finally, when we compare, say, votes {49, 51} and seats {51, 49}, then we see a dramatic change of majority, even though there is only a slight inequality / disproportionality. It is useful to have an indicator for this too. It appears that this can be done by using a negative sign when such majority reversal occurs. This method of indicating majority reversals is not so sophisticated yet, and at this stage consists of using the sign of the covariance of the vectors of votes and seats.

In sum: the full formula

This text avoids formulas, but it is useful to give the formula for the new SDID measure, so that the reader may link up more easily with the paper in which it is actually developed. For the vectors of votes and seats we use the symbols v and s, and the angle between the two vectors gives the cosine and then the sine:

SDID[v, s] = sign · 10 · √Sin[v, s]
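
The following Python sketch (assuming numpy) implements this formula as described verbally above; it is an illustration, not the author's own implementation, and the shares in the first example are hypothetical placeholders.

import numpy as np

def sdid(votes, seats):
    # Sine-Diagonal Inequality / Disproportionality, following the verbal
    # description above: sign * 10 * sqrt(sine of the angle between the
    # vote and seat vectors), with a negative sign flagging a majority
    # reversal via the covariance of the two vectors.
    v = np.asarray(votes, dtype=float)
    s = np.asarray(seats, dtype=float)
    c = np.dot(v, s) / (np.linalg.norm(v) * np.linalg.norm(s))
    sine = np.sqrt(max(0.0, 1.0 - c * c))
    sign = 1.0 if np.cov(v, s)[0, 1] >= 0 else -1.0
    return sign * 10.0 * np.sqrt(sine)

print(sdid([42, 40, 18], [49, 43, 8]))   # about 4.3 for these hypothetical shares
print(sdid([49, 51], [51, 49]))          # about -2.0: small inequality, reversed majority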

For the UK 2017 the SDID value is 3.7. For comparison, the values for Holland with equal / proportional representation are: LHID 3, EGID 1.7, SDID 2.5. Apparently Holland is not yet as equal / proportional as can be. Holland uses the Jefferson / D’Hondt method, which favours larger parties in the allocation of remainder seats. There is also the wasted vote, when people vote for fringe parties that do not succeed in getting seats. In a truly equal or proportional system, the wasted vote can be respected by leaving seats empty or by adopting a qualified majority rule.

Cosine and R-squared

Remarkably, Karl Pearson (1857-1936) also used the cosine when he created R-squared, also known as the “coefficient of determination”. Namely:

  • R-squared is the cosine-squared applied to centered data. Centered data arise when one subtracts the mean value from the original data. For such data it is advisable to use a regression with a constant, since the constant captures the mean effect.
  • Above we have been using the original (non-centered) data. Alternatively put, when we apply the above Regression Through the Origin (RTO) and then look for the proper coefficient of determination, we get the cosine-squared. (A small computational check follows below this list.)
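
A minimal numerical check in Python (assuming numpy) of both statements, on randomly generated data:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

def cos2(a, b):
    return (np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))) ** 2

# R-squared of the regression with a constant equals cos^2 of the centered data.
r2_centered = np.corrcoef(x, y)[0, 1] ** 2
print(np.isclose(r2_centered, cos2(x - x.mean(), y - y.mean())))   # True

# The uncentered coefficient of determination of RTO equals cos^2 of the raw data.
b = np.dot(x, y) / np.dot(x, x)                # RTO slope
r2_rto = np.sum((b * x) ** 2) / np.sum(y ** 2)
print(np.isclose(r2_rto, cos2(x, y)))          # True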

The SDID measure thus provides a “missing link” in statistics between centered and non-centered data, and also provides a new perspective on R-squared itself.

Apparently statistics has so far found little use for original (non-centered) data and RTO. A possible explanation is that statistics fairly soon neglected descriptive statistics as less challenging and focused on statistical decision making. Textbooks prefer the inclusion of a constant in the regression, so that one can test whether it differs from zero with statistical significance. The constant is essentially used as an indicator for possible errors in modeling. The use of RTO, or the imposition of a zero constant, would block that kind of application. However, this (traditional, academic) focus on statistical decision making apparently caused the neglect of a relevant part of the analysis, which now comes to the surface.

R-squared has relatively little use

R-squared is often mentioned in statistical reports about regressions, but it is actually used for little more than reporting. Cosma Shalizi (2015:19) states:

“At this point, you might be wondering just what R-squared is good for — what job it does that isn’t better done by other tools. The only honest answer I can give you is that I have never found a situation where it helped at all. If I could design the regression curriculum from scratch, I would never mention it. Unfortunately, it lives on as a historical relic, so you need to know what it is, and what misunderstandings about it people suffer from.”

At the U. of Virginia Library, Clay Ford summarizes Shalizi’s points on the uselessness of R-squared, with a reference to his lecture notes.

Since the cosine is symmetric, the R-squared is the same whether we regress y on x or x on y. Shalizi (2015, p18) infers from this symmetry: “This in itself should be enough to show that a high R² says nothing about explaining one variable by another.” This is too quick. When theory shows that x is a causal factor for y, it makes little sense to argue conversely that y explains x. Thus, for research, the percentage of explained variation can be informative. Obviously it matters how one actually uses this information.

When it is reported that a regression has an R-squared of 70% then this means that 70% of the variation of the explained variable is explained by the model, i.e. by variation in the explanatory variables and the estimated coefficients. In itself such a report does not say much, for it is not clear whether 70% is a little or a lot for the particular explanation. For evaluation we obviously also look at the regression coefficients.

One can always increase R-squared by including other and even nonsensical variables. For a proper use of R-squared we would look at the adjusted R-squared. R-adj finds its use in model specification searches – see Dave Giles 2013. For R-adj to increase, an added coefficient must have an absolute t-value larger than 1. A proper report would show how R-adj increases by the inclusion of particular variables, e.g. also compared with studies by others on the same topic; comparison across different topics would obviously be rather meaningless. Shalizi also rejects R-adj and suggests working directly with the mean squared error (MSE, also corrected for the degrees of freedom). Since R-squared relates to the cosine and the MSE to the sine, these are basically two sides of the same coin, so that this discussion is much ado about little. For standardised variables (difference from the mean, divided by the standard deviation), the regression coefficient equals the correlation coefficient, whose square is the R-squared, and then it is relevant for the effect size.
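
As a small illustration of the adjustment (using the standard formula with n observations and k regressors beside the constant; my own illustration, not taken from said paper): a regressor that raises R-squared only marginally, i.e. one with an absolute t-value below 1, lowers R-adj.

def r2_adjusted(r2, n, k):
    # Adjusted R-squared for n observations and k regressors beside the constant.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

print(r2_adjusted(0.700, 50, 3))   # about 0.680
print(r2_adjusted(0.702, 50, 4))   # about 0.676: lower, despite the higher R-squared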

R-squared is a sample statistic, and thus it depends upon the particular sample. The hypothesis is that the population has a ρ-squared. For this reason it is important to distinguish between a regression on fixed data and a regression in which the explanatory variables also have a (normal) distribution (errors in variables). In his 1915 article on the sample distribution of R-squared, R.A. Fisher (digital library) assumed the latter. With fixed data, say X, the outcome is conditional on X, so that it is better to write ρ[X], lest one forget about the situation. See my earlier paper on the sample distribution of R-adj. Dave Giles has a fine discussion about R-squared and adjusted R-squared, and a search gives more pages. He confirms the “uselessness” of R-squared: “My students are often horrified when I tell them, truthfully, that one of the last pieces of information that I look at when evaluating the results of an OLS regression, is the coefficient of determination (R2), or its “adjusted” counterpart. Fortunately, it doesn’t take long to change their perspective!” Such a statement should not be read as implying the uselessness of the cosine or sine in general.

A part of the history of statistics that is unknown to me

I am not familiar with the history of statistics, and it is unknown to me what else Pearson, Fisher, Gosset and other founding and early authors wrote about the application of the cosine or sine. The choice to apply the cosine to centered data to create R-squared was deliberate, and Pearson would have been aware that it might also be applied to original (non-centered) data. It is also likely that he did not have the full perspective above, because then it would already have been in the statistical textbooks. It would be interesting to know what the considerations at the time were. Quite likely the theoretical focus was on statistical decision making rather than on description, yet this history, unknown to me, would put matters more into perspective.

Statistical significance

Part of the history is that R.A. Fisher, with his attention to mathematics, emphasized precision, while W.S. Gosset, with his attention to practical application, emphasized the effect size of the coefficients found by regression. Somehow statistical significance in terms of precision became more important than content significance, and empirical research has followed Fisher rather than the practical relevance of Gosset. This history and its meaning are discussed by Stephen Ziliak & Deirdre McCloskey 2007; see also this discussion by Andrew Gelman. As said, for standardised variables the regression coefficient equals the correlation coefficient (whose square is the R-squared), and this is best understood with attention to the effect size. For some applications a low R-squared would still be relevant for the particular field.

Conclusion

The new measure SDID provides a better description of the inequality or disproportionality of votes and seats than the existing measures. The new measure has been tailored to votes and seats, by means of greater sensitivity to small inequalities, and because a small change in inequality may have a crucial impact on the (political) majority. For different fields, one could tailor measures in a similar manner.

That the cosine can be used as a measure of similarity has been well known in the statistics literature since the start, when Pearson used the cosine for centered data to create R-squared. For the sine I have not found direct applications, but its use is straightforward when we look at the opposite of similarity.

The proposed measure provides an enlightening bridge between descriptive statistics and statistical decision making. This comes with a better understanding of what kind of information the cosine or R-squared provides, in relation to regressions with and without a constant. Statistics textbooks would do well to offer their students this new topic for both theory and practical application.

This weblog entry copies the earlier entry that used an estimate. Now we use the actual YouGov data, below. Again we can thank YouGov and Anthony Wells for making these data available. The conclusions do not change, since the estimate apparently was fairly good. It concerns a very relevant poll, and it is useful to have the uncertainty of the estimate removed.

The earlier discussion on Proportional Representation versus District Representation has resulted in these two papers:

Brexit stands out as a disaster of the UK First Past The Post (FPTP) system and the illusion that one can use referenda to repair disproportionalities caused by FPTP. This information about the real cause of Brexit is missing in the otherwise high quality overview at the BBC.

The former weblog text gave an overview of the YouGov polling data of June 12-13 2017 on the Great Britain (UK minus Northern Ireland) preference orderings on Brexit. The uncertainty of the estimate is removed now, and we are left with the uncertainty of having polling data. The next step is to use these orderings in the various voting philosophies. I will be using the website of Rob LeGrand since this makes for easy communication; see his description of the voting philosophies. Robert Loring has a website that refers to LeGrand, and Loring is critical of FPTP too. However, I will use the general framework of my book “Voting theory for democracy” (VTFD), because there are some general principles that many people tend to overlook.

Input format

See the former entry for the problem and the Excel sheet with the polling data of the preferences and their weights. LeGrand’s website requires us to present the data in a particular format. It seems best to transform the percentages into per-millions, since that website seems to require integers and we want some accuracy, even though polling data come with uncertainty. There are no preferences with zero weight, so we get 24 nonzero weighted options. We enter those and then click on the various schemes. See the YouGov factsheet for the definition of the Brexit options; for short we have R = Remain, S = Soft / Single Market, T = Tariffs / Hard, N = No Deal / WTO. Observe that the various options for Remain are missing, though these are important too. (A small sketch for loading these orderings follows after the list below.)

248485:R>S>T>N
38182:R>S>N>T
24242:R>T>S>N
19394:R>T>N>S
12727:R>N>S>T
10909:R>N>T>S
50303:S>R>T>N
9091:S>R>N>T
22424:S>T>R>N
66667:S>T>N>R
9091:S>N>R>T
36364:S>N>T>R
6667:T>R>S>N
3636:T>R>N>S
12121:T>S>R>N
46667:T>S>N>R
15758:T>N>R>S
135152:T>N>S>R
9697:N>R>S>T
9091:N>R>T>S
8485:N>S>R>T
37576:N>S>T>R
16970:N>T>R>S
150303:N>T>S>R
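
For readers who prefer to check the tallies below outside LeGrand's website, here is a minimal Python sketch (not part of the original analysis) for loading these weighted orderings; it assumes exactly the weight:ordering format shown above, with the full 24-line list pasted into DATA (only the first two lines are shown here).

# Parse the weighted orderings into (weight, ranking) pairs.
DATA = """248485:R>S>T>N
38182:R>S>N>T"""

ballots = []
for line in DATA.strip().splitlines():
    weight, ordering = line.split(":")
    ballots.append((int(weight), ordering.split(">")))

print(ballots[0])   # (248485, ['R', 'S', 'T', 'N'])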

Philosophy 1. Pareto optimality

The basic situation in voting has a Status Quo. The issue on the table is that we consider alternatives to the Status Quo. Only those options are relevant that are Pareto improving, i.e. where some advance while none lose. Commonly there are several Pareto-improving options, whence there is a deadlock that the Pareto criterion itself cannot resolve, and then majority voting might be used to break the deadlock. Many people tend to forget that majority voting is mainly a deadlock-breaking rule, for it would not be acceptable for a majority to plunder a minority. The Pareto condition thus gives the minority veto rights against being plundered.

(When voting for a new Parliament then it is generally considered no option to leave the seats empty, whence there would be no status quo. A situation without a status quo tends to be rather exceptional.)

In this case the status quo is that the UK is a member of the EU. The voters for R block a change. The options S, T and N do not compensate the R voters. Thus the outcome remains R.

This is the fundamental result. The philosophies in the following neglect the status quo and thus should not really be considered.

PM 1. Potentially though, the S, T and N options must be read such that the R voters will be compensated for their loss.

PM 2. Potentially though, Leavers might reason that the status quo concerns national sovereignty, which the EU encroaches upon. The BBC documentary “Europe: ‘Them’ or ‘Us’” remarkably explains that it was Margaret Thatcher who helped abolish the UK veto rights and accepted EU majority rule, and who ran this through the UK Parliament without proper discussion. There seems to be good reason to return to unanimity rule in the EU, yet it is not necessarily a proper method to neglect the rights of R. (And it was Thatcher who encouraged the neoliberal economic policies that many UK voters complain about as if these came from the EU.)

Philosophy 2. Plurality

On LeGrand’s site we get Plurality as the first step in the Hare method. R gets about 35% while the other options each get less than 35%. Thus the outcome is R.
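
A quick check with the ballots list from the parsing sketch above (hypothetical helper code, not LeGrand's site):

from collections import defaultdict

# Plurality: tally the weights of the first preferences only.
totals = defaultdict(int)
for weight, ranking in ballots:
    totals[ranking[0]] += weight

total_weight = sum(totals.values())
for option, w in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(option, round(100 * w / total_weight, 1))
# With the full data, R comes out around 35%, as stated above.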

(The Brexit referendum question in 2016 was flawed in design e.g. since it hid the underlying disagreements, and collected all dissent into a single Leave, also sandwiching R between various options for Leave.)

Philosophy 3. Hare, or Instant Run-off, a form of Single Transferable Vote (STV)

When we continue with Hare, R remains strong and collects votes when S and N drop off (as R is curiously sandwiched between options for Leave in many orderings). Eventually R gets 45.0% and T gets 55.0%. Observe that this poll was held on June 12-13 2017, and that some 25% of the voters “respect” the 2016 referendum outcome that was, however, flawed in design. I have not found information about preference orderings at the time of the referendum.
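
The run-off can be checked with the ballots list from the parsing sketch above; this is a generic instant run-off routine, not LeGrand's code, so treat the exact percentages with care.

def instant_runoff(ballots):
    # Repeatedly drop the option with the fewest first preferences among the
    # remaining options, until one option has a majority of the total weight.
    remaining = {opt for _, ranking in ballots for opt in ranking}
    while len(remaining) > 1:
        totals = {opt: 0 for opt in remaining}
        for weight, ranking in ballots:
            for opt in ranking:
                if opt in remaining:
                    totals[opt] += weight
                    break
        leader = max(totals, key=totals.get)
        if totals[leader] > sum(totals.values()) / 2:
            return leader, totals
        remaining.remove(min(totals, key=totals.get))
    return remaining.pop(), None

winner, final_round = instant_runoff(ballots)
print(winner, final_round)
# With the full data the final round should show T against R, roughly 55% to 45%.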

Philosophy 4. Borda

Borda generates the collective ranking S > T > R > N. This is Case 9 in the original list, and fortunately this is single-peaked.
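
This ranking can be reproduced with the ballots list from the parsing sketch above:

from collections import defaultdict

# Borda count: with 4 options a first place scores 3 points, a last place 0.
borda = defaultdict(int)
for weight, ranking in ballots:
    for position, option in enumerate(ranking):
        borda[option] += weight * (len(ranking) - 1 - position)

print(sorted(borda.items(), key=lambda kv: -kv[1]))
# With the full data this gives the collective ranking S > T > R > N.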

Philosophy 5. Condorcet (Copeland)

Using Copeland, we find that S is also the Condorcet winner, i.e. it beats each other option in pairwise contests. This means that S is also the Borda Fixed Point winner.
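
The pairwise contests can likewise be checked with the ballots list from the parsing sketch above:

from itertools import combinations

# Copeland: count for each option how many pairwise contests it wins.
options = sorted({opt for _, ranking in ballots for opt in ranking})
wins = {opt: 0 for opt in options}
for a, b in combinations(options, 2):
    a_over_b = sum(w for w, r in ballots if r.index(a) < r.index(b))
    b_over_a = sum(w for w, r in ballots if r.index(b) < r.index(a))
    if a_over_b > b_over_a:
        wins[a] += 1
    elif b_over_a > a_over_b:
        wins[b] += 1

print(wins)
# With the full data S wins all three of its pairwise contests: the Condorcet winner.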

Conclusions

The major point of this discussion is that the status quo consists of the UK membership of the EU. Part of the status quo is that the UK may leave by invoking Article 50. However, the internal process that led to the invoking of Article 50 leaves much to be desired. Potentially many voters got the impression that they might vote about membership afresh, without the need to compensate those who benefit from Remain.

Jonathan Portes suggested in 2016 that the Brexit referendum question was flawed in design because there might be a hidden Condorcet cycle. The YouGov poll did not contain questions that allow checking this, also because much has happened in 2016-2017, including the misplaced “respect” by 25% of the voters for the outcome of a flawed referendum. A key point is that options for Remain are not included, even though they would be relevant. My impression is that the break-up of the UK would be a serious issue, even though, curiously, many Scots apparently prefer the certainty of closeness to the larger economy of the UK over the uncertainties of continued membership of the EU while the UK is leaving.

It would make sense for the EU to encourage a reconsideration within the UK of what people really want. The Large Hadron Collider is expensive, but by comparison it might be less expensive for the UK to switch to PR, split up its confused parties (see this discussion by Anthony Wells), and hold a new vote for the House of Commons. The UK already has experience with PR, namely for the EU Parliament, and it should not be too complex to use this approach for the national level as well.

Such a change might also make it more acceptable to other EU member states if the UK were to Breget. Nigel Farage benefited much from Proportional Representation (PR) in the EU Parliament, and it would be welcome if he lobbied for PR in the UK too.

Nevertheless, given the observable tendency in the UK to prefer a soft Brexit, the EU would be well advised to agree with such an outcome, or face a future with a UK that rightly or wrongly feels quite maltreated. Confused as the British have been on Brexit, they might also be sensitive to a “stab-in-the-back” myth.