
Many people think that political science on electoral systems and referenda must be a science, since otherwise it would not be called a science. Unfortunately, the label “political science” was coined around 1903 with the creation of the American Political Science Association (APSA), and the label reflects an aspiration rather than an achievement. In the UK there is the Political Studies Association (PSA), founded in 1950 and baptised more modestly, since much of the scholarship still belongs to the humanities. It turns out that many statements by “political science / studies on electoral systems and referenda” aren’t scientific, and for their relevance for empirical reality they can only be compared to astrology, alchemy or homeopathy. A scientist looking at a UK General Election can only think “garbage in, garbage out” (GIGO).

The UK has been fundamentally disinformed about its electoral system of district representation and about the use of referenda like the Brexit Referendum of 2016. The UK is locked in tradition and in fuzzy thinking from the humanities. The situation may be explained by the historical path that the UK has taken, but this history has not included a proper application of science to the notion of democracy.

Compare the current chaos w.r.t. Brexit to the chaos of the financial crisis of 2008. On the latter, the UK Queen famously asked:

“Why did nobody notice it?”

There is a rather long list of economists who issued warnings in time, with Hyman Minsky at the top and me somewhere too. The next question rather is why such warnings weren’t taken seriously in the policy making process. My diagnosis since 1990 is that there is a failure of the separation of powers, the Trias Politica, which still leaves too much room for politicians to manipulate the information. The remedy is to create an Economic Supreme Court (ESC) that guards the quality of information for policy. The House of Commons would still determine policy but would get less room to disinform the public. The current UK Office for Budget Responsibility (OBR) is a far cry from what is actually needed.

With this analogy established, consider Brexit again. Might the Queen not repeat her question? Now, however, there doesn’t seem to be a list of early warnings that were overlooked. Instead we have a “political science” that has gotten lost in abstraction. Here the remedy is to ask proper scientists, from physics to biology to psychometrics to econometrics, to look at democracy and to help “political science / studies on electoral systems” become a proper science too. My suggestion is to team up empirical scientists from the Royal Society with members of the PSA and the British Academy, and to encourage a buddy system to start delving into this. The place to start is my paper “One woman, one vote. Though not in the USA, UK and France” at MPRA 2018, and presentation 1270381 at Zenodo.org on the distance between votes and seats.

Many people think that the Brexit Referendum of 2016 allowed voters to express their decision, with 52% Leave and 48% Remain. However, not all voters expressed a decision: many were only guessing. A YouGov poll at the time of the 2017 General Election (GIGO again) showed that 17% of voters ranked Remain between different options for Leave. Voters were forced to make a strategic choice about what they feared most might happen. See my deconstruction of this mess in the October 2017 Newsletter of the Royal Economic Society (RES).

Now there are calls for a second referendum. This call wants to resolve the current chaos by creating more chaos, and potentially a “stab in the back” myth that the supposed decision of 2016 was not listened to. The lesson from the current chaos should rather be that referenda are generally dumb and dangerous, even in the form of the neverendum. The real problem lies in the UK system of district representation, which structurally fails to reflect the views and interests of voters. The deeper problem is that the House of Commons and the electorate are disinformed by an academic field that is still comparable to astrology, alchemy or homeopathy. There is disinformation on a grand scale by famous UK scholars like Iain McLean, John Curtice, the younger Alan Renwick, and (other) members of the PSA.

My suggestion is that the UK switches to equal proportional representation (EPR), say by adopting the Dutch system of open lists (in which you may always vote for a regional candidate, though people tend not to do so), holds proper elections, and then lets the new House of Commons discuss the relation with the EU again. It is not unlikely that the EU would allow the UK the time for such a fundamental reconsideration of both its democracy and Brexit. UK political parties may need to split up to offer voters the relevant spectrum of views, though one must allow for election alliances (especially the former Dutch method of list combinations). To some readers this suggestion may be reminiscent of earlier discussions about district versus proportional representation (DR vs EPR). However, there now is the key new insight about the disinformation by the “political science / studies on electoral systems”, which makes it necessary to re-evaluate what has been claimed in the past by the academic ivory towers, and also by the disinforming UK Electoral Reform Society (ERS). It remains to be seen whether the UK would want to switch from DR to EPR, but the first step would be to provide the public with proper information.

PS. An eye-opener can be that “political science on electoral systems” relies upon common language instead of developed definitions. Physics also borrowed common words like “force” and “mass”, yet it provided precise definitions, and gravity in Holland has the same meaning as gravity in the UK. The “political science on electoral systems” uses the same word “election”, but an “election” in Holland with EPR is entirely different from an “election” in the UK with DR. In reality there is a difference between a contest (DR) and a bundling of votes to support a representative (EPR). We find that the UK is locked into confusion by its vocabulary. An analogy is the following. Consider the medieval trial by combat or “judgement of God”, which persisted in the phenomenon of duelling to settle conflicts. A duel was once seriously seen as befitting the words “judgement” and “trial”. Eventually civilisation arrived at the application of law with procedures in court. Using the same words “judgement” and “trial” for both a duel and a court decision confuses what is really involved, even though the outward appearance may look the same, namely that only one party passes the gate. The UK suffers the same kind of confusion about the “General Election for the House of Commons”, which actually is no proper election of interest representatives but a set of contests for district winners. The system of DR is proto-democratic, and not the proper democracy that uses EPR.

Picture: Wikimedia — Queen in the UK, Duel in France, Judges in The Hague.


Let us look beyond Brexit and determine the implications for democracy itself. We can conclude that the UK has an intellectual community that is quite blind to the very notion of democracy. When the educated run astray, the only anchor left lies in the democratic notions of the whole population, and this opens the doors to what is called “populism”.

I started looking into Brexit after the surprise referendum outcome in 2016. This memo sums up my findings of the last two years. The following identifies where the educated community in the UK is in need of re-educating itself.

Earlier, in 1990-1994, I had already concluded that Montesquieu’s model of the separation of powers, the Trias Politica, has failed in a key aspect since its conception in 1748. Democracies need the fourth power of an Economic Supreme Court, see (2014). It is necessary to mention this earlier conclusion, which predates Brexit, but let us now continue with findings following Brexit.

To start with: what does the UK electorate really want w.r.t. Brexit or Bremain? Neither the Referendum of 2016 nor the General Election of 2017 provides adequate information. One would think it rather damning for a claimed democracy when its procedures do not result in adequate clarity on such a fundamental issue.

The 2016 Referendum Question concerned the legal issue of Leave or Remain but was disinformative about the ways of Leaving or Remaining. The political parties that were elected into the House of Commons are split on both direction and ways as well. The overall situation can only be described as chaotic. One might try to characterise this more positively: a population with divided views generated a House of Commons with divided views, which would be democracy itself. But this neglects that there is no information about what those divided views actually are. The true process is “garbage in, garbage out”, and this doesn’t fit the definition of democracy.

The very Brexit or Bremain Referendum Question fails the criteria for a decent statistical enquiry. I am surprised that the Royal Statistical Society (RSS) did not protest. The question of Leave or Remain is a binary legal issue, but the true issue is the set of policy options. It took some time to analyse this, but with the help of Anthony Wells of YouGov.com I managed to dissect it, see (2017abc). Some 17 per cent of voters ranked Remain between different versions of Leave, which implies a grand game of guessing what to vote for, and which means that the Referendum failed in its purpose of expressing preferences. The UK Electoral Commission missed this, does not care, and is happy to take the legal position. They claim to provide proper information to the general public, but what they regard as “information” is regarded by statistical science as disinformation (and the RSS is silent on this). One is reminded of Byzantium rather than claimed modernity.
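To see the structure of the problem, consider a small sketch in Mathematica (the option names are my own, for illustration): with Remain and two versions of Leave there are six strict rankings, and exactly two of them put Remain in the middle. Precisely those voters cannot express their preference in a binary question and must guess which Leave would materialise.

rankings = Permutations[{"Remain", "SoftLeave", "HardLeave"}];
Select[rankings, #[[2]] == "Remain" &]
(* {{"SoftLeave", "Remain", "HardLeave"}, {"HardLeave", "Remain", "SoftLeave"}} *)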

The main question is why the UK had the referendum in the first place. Holland has had a system of equal proportional representation (EPR) for the House of Commons since 1917, so that referenda are not required. The UK has a system of district representation (DR) that lacks such proportionality, and that invites the confusion that referenda might be used to find out what the electorate really thinks. The latter is a confusion indeed, since it neglects the important role of bargaining, see (2017c).

This diagnosis set me on the course of investigating why the USA, UK and France have DR and not EPR. My original thought was that a party that won an election would have no reason to change a system that caused its election. This would explain why the USA, UK and France were stuck with DR and did not switch to EPR. Last year I discovered that the true cause is different. My finding for the UK is that there is an amazing blindness in the UK intellectual community. The report in (2018a) sends a chill down the spine. It appears that “political science on electoral systems” is no science yet, but still solidly within the Humanities, akin to astrology, alchemy and homeopathy. The eye-opener is that these academics use the same word “election” for both DR and EPR while these actually have entirely different meanings. In reality only EPR has proper elections befitting a proper democracy. The DR system is a proto-democracy that relies on contests. Political “science” is blind to what this means, not only for proper scientific analysis but also for communication with the general public. Voters are disinformed on a grand scale, both in the textbooks in government classes and in public discussion, e.g. on “election” nights.

Compare physics, which also borrowed words from colloquial English, like “force” and “mass”. Yet in physics these words have received precise meaning. In physics, gravity in Holland has the same meaning as gravity in the UK. Political “science” uses colloquial terms like “election” and “democracy”, but those meanings are not fixed. An “election” in Holland with EPR is entirely different from an “election” in the UK with DR. Political “science” thus uses terms that confuse both the academics and the public. When historians describe how the West developed into democracy, they occlude the fact that the USA, UK and France are still in a proto-democratic phase.

A first complication is: there appears to be a special role for the UK Electoral Reform Society (ERS), founded in 1884 and originally known as the Proportional Representation Society. Here we find an independent and disinterested group that criticises DR and that claims to further the UK on the historical path towards EPR. However, it appears that the ERS wants a transferable vote, while its claim that transferability generates proportionality is simply false. Such distortion contributed to the debacle of the 2011 referendum on the “alternative vote”, which is a counterproductive construct to start with. When one presents the ERS with this criticism, the reply appears to be disingenuous. Instead of clearly adopting EPR, either in the Dutch version or as in the UK elections for the EU Parliament, both with a wealth of experience from actual application, the ERS appears addicted to the notion of a transferable vote, and wants this model at any cost. Psychology might explain how such zealotry arises, but it remains far removed from proper information for the general public.

A second complication is: there appears to exist a confusion w.r.t. the interpretation of Arrow’s Impossibility Theorem on democracy. In this, there is a major role for mathematicians, who mainly look at models and who neglect empirical science. This would lead too far for this memo; an overview is given in (2018e).

A third complication is: there is the interference of a grand coalition of statistics and political science (with some ambiguity whether quotation marks should be used) in creating a black hole on democracy and its measurement, see (2018bcd). Political science never managed to find a good measure for the difference between vote shares and seat shares. My proposal is to use the “sine-diagonal inequality / disproportionality” (SDID) measure, which does for democracy what the Richter scale does for earthquakes. Political science has shown too little understanding of statistics, or perhaps failed in finding such a measure because statistical science did not develop this theory or did not understand what the political scientists were looking for. This hole has been plugged now, see (2018bcd). Nevertheless, this diagnosis calls for a reorganisation of university courses in statistics and political science.
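For readers who want to see the core of the idea in code: the sine of the angle between the vote vector and the seat vector is zero at perfect proportionality and grows with disproportionality. This is only a minimal sketch with illustrative shares; the full SDID of (2018b) adds a sign taken from the slope and a Weber-Fechner type sensitivity, which are not shown here.

sinVS[v_, s_] := Sqrt[1 - (v.s)^2 / ((v.v) (s.s))]
votes = {0.37, 0.30, 0.13, 0.20};   (* illustrative vote shares *)
seats = {0.51, 0.35, 0.00, 0.14};   (* illustrative seat shares *)
sinVS[votes, seats]                 (* 0 would mean perfect proportionality *)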

The enclosed graph highlights the “perfect storm” of blindness of the intellectual community that lurks behind Brexit. The figure is documented in (2018d). The main idea is that statistics and other sciences like physics, biology, psychometrics and econometrics could help “political science on electoral systems” to become a proper science. Then science can provide adequate information to the general public.

A conclusion is: the UK electoral system has “winner take all” district representation (DR) that does not provide the equal proportional representation (EPR) of what voters want. Again, the word “representation” means something else for proto-democratic DR than for democratic EPR. My suggestion is that the UK switches to EPR, say by adopting the Dutch system of open lists, holds new elections, and lets the new House discuss Brexit or Bregret again. Bregret is defined by the fact that the House adopted Brexit before and thus might reconsider. It is not unlikely that the EU would allow the UK the time for such a fundamental reconsideration of both its electoral system and Brexit.

It remains to be seen whether the UK electorate would want to stick to the current system of DR or rather switch to EPR. The first step is to provide the UK electorate with adequate information. For this, the UK intellectual community must get its act together on what this information would be. A suggestion is to check the analysis that I have provided here.

 

References

Colignatus (2014), “An economic supreme court”, RES Newsletter, Issue 167, October, pp. 20-21
Colignatus (2017a), “Voting theory and the Brexit referendum question”, RES Newsletter, Issue 177, April, pp. 14-16
Colignatus (2017b), “Great Britain’s June 2017 preferences on Brexit options”, RES Newsletter, Issue 177, October, http://www.res.org.uk/view/art2Oct17Features.html
Colignatus (2017c), “Dealing with Denial: Cause and Cure of Brexit”, https://boycottholland.wordpress.com/2017/12/01/dealing-with-denial-cause-and-cure-of-brexit/
Colignatus (2018a), “One woman, one vote. Though not in the USA, UK and France”, https://mpra.ub.uni-muenchen.de/84482/
Colignatus (2018b), “Comparing votes and seats with cosine, sine and sign, with attention for the slope and enhanced sensitivity to inequality / disproportionality”, https://mpra.ub.uni-muenchen.de/84469/
Colignatus (2018c), “An overview of the elementary statistics of correlation, R-Squared, cosine, sine, Xur, Yur, and regression through the origin, with application to votes and seats for parliament”, https://doi.org/10.5281/zenodo.1227328
Colignatus (2018d), “An overview of the elementary statistics of correlation, R-Squared, cosine, sine, Xur, Yur, and regression through the origin, with application to votes and seats for parliament (sheets)”, presentation at the annual meeting of Dutch and Flemish political science, Leiden, June 7-8, https://zenodo.org/record/1270381
Colignatus (2018e), “The solution to Arrow’s difficulty in social choice (sheets)”, second presentation at the annual meeting of Dutch and Flemish political science, Leiden, June 7-8, https://zenodo.org/record/1269392

The US National Governors Association (NGA Center for Best Practices) and the Council of Chief State School Officers (CCSSO) are the makers of the US Common Core State Standards (CCSS).

The CCSS refer to the Trends in International Mathematics and Science Study (TIMSS) (wikipedia).

TIMSS is made by the International Association for the Evaluation of Educational Achievement (IEA) (wikipedia). It so happens that IEA has its headquarters in Amsterdam but the link to Holland is only historical.

I am wondering whether CCSS and TIMSS adequately deal with the redesign of mathematics education.

There are conditions under which TIMSS is invalid.

There are conditions under which TIMSS is incomplete.

See my letter to IEA (makers of TIMSS) and NGA Center and CCSSO (users of TIMSS, makers of CCSS).

The dictum is to have one subject per letter. That paradise is no longer attainable when time passes and letters and subjects accumulate. Let me take stock of some findings on democracy.

Economic theory needs a stronger defence against the unwise application of mathematics. Mathematicians are trained for abstract thought and not for empirical science. Their contribution can wreak havoc, for example in education with real-life pupils and students, in finance by neglecting real-world risks that contributed to a world crisis, or in voting theory where they don’t understand democracy.

Nowadays, though, I am also wary of students from the Humanities who rely upon legal views (their version of mathematics) instead of empirical understanding.

For the following, distinguish single-seat elections (president, prime minister) from multiple-seat elections (parliament). There is also a key distinction between Equal Proportional Representation (EPR) with proper elections and District Representation (DR), which has contests rather than proper elections.

Key findings

(1) Montesquieu’s Trias Politica of the separation of powers is failing, and we need the separation of a fourth power, an Economic Supreme Court, based upon science, with a position in the constitution at the same level as the Executive, Legislative and Judiciary. The current setup allows too much room for politicians to manipulate the information for policy making. This need for separation can also be proven logically in a model using stylised facts, see the book DRGTPE. A short discussion on the 2007+ European crisis is here.

(2) Kenneth Arrow in his Impossibility Theorem has a correct deduction (there is an impossibility) but a wrong interpretation. He confuses voting and deciding. For this debunking of Arrow’s Theorem, see Chapter 9.2 of Voting Theory for Democracy (pp. 239-251). Sheets of a presentation in June 2018 are here.

(3) A voting method that many might find interesting is the Borda Fixed Point method. See the counterfactual example of selecting a Prime Minister for Holland, and the sketch of the underlying Borda count below.
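For orientation, a minimal sketch in Mathematica of the plain Borda count on which the Fixed Point method builds (the fixed point construction itself is in VTFD and not shown; the ballots are hypothetical):

bordaScores[ballots_] := Module[{cands = Union @@ ballots, n},
  n = Length[cands];
  Association @ Table[c -> Total[(n - Position[#, c][[1, 1]]) & /@ ballots], {c, cands}]]
ballots = {{"A", "B", "C"}, {"B", "C", "A"}, {"B", "A", "C"}};  (* each ballot ranks best to worst *)
bordaScores[ballots]   (* <|"A" -> 3, "B" -> 5, "C" -> 1|>: B is the Borda winner *)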

(4) Political science on electoral systems is no science yet but still locked in the Humanities, and comparable to astrology, alchemy and homeopathy. People in the USA, UK and France still have taxation without representation.

(4a) The key paper is One woman, one vote. Though not in the USA, UK and France.

(4b) A supportive paper develops the SDID distance measure for votes and seats.

(4c) This paper reviews the role of statistics for the latter measure. Sheets of a presentation in June 2018 are here.

(4d) An earlier comparison of Holland and the UK in 2010 (update 2015) contains a major stepping stone, but is not as critical as (4a). This analysis resulted in a short paper for Mathematics Teaching 222 (May 2011) at the time of the UK referendum on Alternative Vote.

Minor results because these lead to dead ends

(5) There are some supplementary findings that I do not regard as major, but rather as roads that one might need to walk in order to discover that they do not lead far.

(5a) There are Two conditions for the application of Lorenz curve and Gini coefficient to voting and allocated seats. The Lorenz curve is a neat way to show the disproportionality and inequality of votes and seats graphically. The Gini is its associated measure; a sketch of the computation follows below. However, the above measure SDID is to be preferred, since it is symmetric, doesn’t require sorting, and has a relation to the R-squared and the Weber-Fechner law.
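For concreteness, a minimal sketch of the Lorenz-Gini computation for votes and seats, with parties sorted by their seats-per-vote ratio and the Gini taken by the trapezoid rule (illustrative data only, not taken from the paper):

lorenzGini[v_, s_] := Module[{w = v/Total[v], z = s/Total[s], ord, cw, cz},
  ord = Ordering[z/w];                       (* sort parties by seats-per-vote ratio *)
  cw = Prepend[Accumulate[w[[ord]]], 0];     (* cumulative vote shares *)
  cz = Prepend[Accumulate[z[[ord]]], 0];     (* cumulative seat shares *)
  1 - Total[Differences[cw] (Most[cz] + Rest[cz])]]   (* 1 minus twice the area under the Lorenz curve *)
lorenzGini[{0.37, 0.30, 0.13, 0.20}, {0.51, 0.35, 0.00, 0.14}]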

(5b) We can compare votes and seats but also use a policy distance. A crucial question is who determines the distance between policies? When we have a distance, how do we process it? I am not convinced by the method, but a discussion is here.

(5c) The Aitchison geometry might present a challenge to SDID. This paper provides an evaluation and finds this geometry less relevant for votes and seats. Votes and seats satisfy only two of seven criteria for application of the Aitchison distance.

(5d) This paper tries to understand the approach by Nicolaus Tideman and compares it with the distinction between voting and deciding.

(5e) Mathematician Markus Schulze was asked to review VTFD but did not check his draft review with me, which caused needless confusion, see here and here. PM. Schulze now has this 2017 paper, but doesn’t refer to the Borda Fixed Point, perhaps thinking that he understands it; apparently he is not open to the diagnosis that his “review” is no proper review.

Conclusion

For the above, it is pleasant that a distinction can be made between key results and findings about dead ends. I listed my debunking of Arrow’s Theorem as a key result, but it also identifies this theorem as a dead end. Thus, it is also a matter of perspective. When you are at the dead end, and turn around, the whole road is open again.

PM. Earlier weblog entries on democracy are here.

Mathematics concerns patterns and can involve anything, so that we need flexibility in our tools when we do or use mathematics. In the dawn of mankind we used stories. When writing was invented we used pen and paper. It is a revolution for mankind, comparable to the invention of the wheel and the alphabet, that we now can do mathematics using a computer. Many people focus on the computer and would say that it is a computer revolution, but computers might also generate chaos, which shows that the true relevance comes from structured use.

I regard mathematics by computer as a two-sided coin, that involves both human thought (supported by tools) and what technically happens within a computer. The computer language (software) is the interface between the human mind and the hardware with the flow of electrons, photons or whatever (I am no physicist). We might hold that thought is more fundamental, but this is of little consequence, since we still need consistency that 1+1 = 2 in math also is 1+1 = 2 in the computer, and properly interfaced by the language that would have 1+1 = 2 too. The clearest expression of mathematics by computer is in “computer algebra” languages, that understand what this revolution for mankind is about, and which were developed for the explicit support of doing mathematics by computer.

The makers of Mathematica (WRI) might be conceptually moving to regarding computation itself as a more fundamental notion than mathematics or the recognition and handling of patterns. Perhaps in their view there would be no such two-sided coin. The brain might be just computation, the computer would obviously be computation, and the language is only a translator of such computations. The idea that we are mainly interested in the structured products of the brain could be less relevant.

Stephen Wolfram is a physicist by origin, and the name “Mathematica” comes from Newton’s book title rather than from “mathematics” itself, though Newton made that reference. Stephen Wolfram obviously has a long involvement with cellular automata, culminating in his A New Kind of Science. Wolfram (2013) distinguishes Mathematica as a computer program from the language that the program uses and is partially written in. Eventually he settled on the term “Wolfram Language” for the computer language that he and WRI use, like “English” is the language used by the people in England (codified by their committees on the use of the English language).

My inclination however was to regard “Mathematica” primarily as the name of the language that happened to be evaluated by the program of the same name. I compared Mathematica to Algol and Fortran. I found the title of Wolfram’s Addison-Wesley book of 1991 & 1998, “Mathematica. A system for doing mathematics by computer”, quite apt. Obviously the system consists of the language and the software that runs it, but the latter might be provided by other providers too, like Fortran has different compilers. Every programmer knows that the devil is in the details, and that a language documentation on paper might not give the full details of actually running the software. Thus when there are no other software providers, it is only accurate to state that the present definition of the language is given precisely by the one program that runs it. This is only practical and not fundamental. In this situation there is no conflict in thinking of “Mathematica as the language of Mathematica”. Thus in my view there is no need to find a new name for the language. I thought that I was using a language, but apparently in Wolfram’s recent view the emphasis was on the computer program. I didn’t read Wolfram’s blog in 2013, otherwise I might have given this feedback.

Wolfram (2017) and (2018) use the terms “computational essay” and “computational thinking”, where the latter apparently is intended to mean something like (my interpretation): programming in the Wolfram Language, using internet resources, e.g. the cloud, and not necessarily the stand-alone version of Mathematica or now also Wolfram Desktop. My impression is that Wolfram indeed emphasizes computation, and that he perhaps also wants to get rid of a popular confusion of the name “Mathematica” with mathematics only. Apparently he doesn’t want to get rid of that name altogether, likely given his involvement in its history and also its fine reputation.

A related website is https://www.computerbasedmath.org (CBM) by Conrad Wolfram. Most likely Conrad adopts Stephen’s view on computation. It might also be that CBM finds the name “Mathematica” disinformative, as educators (i) may be unaware of what this language and program is, (ii) may associate mathematics with pen and paper, and (iii) would pay attention however at the word “computer”. Perhaps CBM also thinks: You better adopt the language of your audience than teach them to understand your terminology on the history of Mathematica.

I am not convinced by these recent developments. I still think: (1) that this is a two-sided coin (but I am no physicist and do not know about electrons and such), (2) that it is advantageous to clarify to the world: (2a) that mathematics can be used for everything, and (2b) that doing mathematics by computer is a revolution for mankind, and (3) that one should beware of people without didactic training who want to ship computer technology into the classroom. My suggestion to Stephen Wolfram remains, as I made before in (2009, 2015a), that he turn WRI into a public utility like those that exist in Holland – while it already has many characteristics of one. It is curious to see that the open source initiatives apparently will not use the language of Mathematica, now by WRI (also) called the Wolfram Language, most likely because of copyright fears, even while it is good mathematics.

Apparently there are legal concerns (but I am no lawyer) in which issues like 1+1 = 2 or [Pi] are not under copyright, but choices for software can be. For example, the use of h[x] with square brackets rather than parentheses h(x) might be presented to the copyright courts as a copyright issue. This is awkward, because it is good didactics of mathematics to use the square brackets. Not only computers but also kids may get confused by expressions a(2 + b) and f(x + h) - f(x). Let me refer to my suggestion that each nation sets up its own National Center for Mathematics Education. Presently we have a jungle that is no good for WRI, no good for the open source movement (e.g. R or https://www.python.org or http://jupyter.org), and especially no good for the students. Everyone will be served by clear distinctions between (i) what is in the common domain for mathematics and education of mathematics (the language) and (ii) what would be subject to private property laws (programs in that language, interpreters and compilers for the language) (though such could also be placed into the common domain).
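A small illustration in the language of Mathematica itself of why the square brackets are good didactics: function application and multiplication remain distinct, where a(2 + b) on paper is ambiguous.

f[x_] := x^2
f[3]        (* function application: 9 *)
a = 5;
a (2 + 3)   (* juxtaposition means multiplication: 25 *)
f (3)       (* not an application but the product 3 f: the brackets prevent the classroom confusion *)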

PM. This test has been included in the May 23 2018 article on Arithmetic with H = -1. Earlier related blogs are here and here.

Colignatus, Th. (2009, 2015a), Elegance with Substance, (1) website (2) PDF on Zenodo

Wolfram, S. (1991, 1998), Mathematica. A system for doing mathematics by computer, 2nd edition, Addison-Wesley

Wolfram, S. (2013), What Should We Call the Language of Mathematica?, weblog

Wolfram, S. (2017), What Is a Computational Essay?, weblog

Wolfram, S. (2018), Launching the Wolfram Challenges Site, weblog

For our understanding of history we like to distinguish between structural developments and contingencies.

Examples of structure would be the rise of the world population and Jared Diamond’s Guns, Germs, and Steel. Obviously, various authors have various suggestions for what they consider to be structure, but the lack of consensus generally doesn’t matter as long as the discussion continues, and as long as people are aware that there are different points of view. It is rather tricky to identify structure for the here and now because it might require the perspective of some centuries to arrive at proper evaluation.

There are also major contingent events that shaped developments. The collapse of civilisation in 1177 BC would be a perfect storm. Caesar might not have crossed the Rubicon. His alea iacta est indicates that he took a calculated risk, and the outcome might have been different. If the weather had been better, then perhaps the Armada would have conquered England and saved the world for Catholicism.

Thus we distinguish between structure, relevant contingency and irrelevant contingency.

Brexit came with such surprise that we are still discussing how it could have happened. It very much looks like a perfect storm. The 2016 referendum result has many curious aspects. The referendum question itself doesn’t fit the requirements of a scientifically warranted statistical questionnaire – and the British Electoral Commission doesn’t mind. Even in 2017, 17% of UK voters put Remain between different options for Leave, and those of them who voted Leave in 2016 might not have done so if they had known that their preferred option might not materialise (see here). Hannes Grassegger & Mikael Krogerus point to media manipulation. Referenda are instruments of populism, and the better model of democracy is representative democracy. Chris Patten rightly remarks that the UK House of Commons had more options than Theresa May suggested:

“The Brexit referendum last June was itself a disaster. A parliamentary democracy should never turn to such populist devices. Even so, May could have reacted to the 52 per cent vote to quit Europe by saying that she would hand the negotiations to a group of ministers who believed in this outcome and then put the result of the talks in due course to parliament and the people. Instead, she turned the whole of her government into a Brexit machine, even though she had always wished to remain in the EU. Her government’s motto is now “Brexit or bust.” Sadly, we will probably get both.”

Structural cause of Brexit

My take of the structural cause of Brexit is clarified by the following table. We distinguish Euro and Non-Euro countries versus the political structures of district representation (DR) and equal or proportional representation (EPR).

            District representation (DR)    Equal or proportional representation (EPR)
Euro        France                          Holland (natural quota), Germany (threshold 5%)
Non-Euro    UK (Brexit)                     Sweden (threshold 4%), Norway (non-EU, threshold 4%)

Update 2018-02-27: On the distinction between DR and EPR, there are: (1) this short overview of elementary statistics with an application to votes and seats, (2) a deconstruction of the disarray in the “political science on electoral systems” (1W1V), and (3) details on the suggestion for an inequality or disproportionality measure (SDID).

In the special Brexit edition of BJPIR, Helen Thompson discusses inevitability and contingency, and concludes that the position of the UK as a non-Euro country in a predominantly Eurozone EU became politically untenable.

  • For the voters in the UK, migration was a major issue. The world financial crisis of 2007+ and the contractionary policies of the Eurozone turned the UK into a “job provider of last resort”.
  • For the political elite, the spectre of the Euro loomed large. Given the theory of the optimal currency area, the Eurozone must either integrate further or break up. The UK didn’t want to join the Euro and thus found itself at the fringe of the EU on an increasing number of issues. With the increasing loss of power and influence on developments, more and more politicians saw less and less reason to participate.

Thompson regards the economic angle as a sufficient structural cause. My take is that it is only necessary, and that another necessary element is the form of parliamentary representation. In my recent paper One woman, one vote. Though not in the USA, UK and France, with its focus on this parliamentary dimension, I forward the diagnosis that the UK political system is the main cause. Brexit is not proof of a successful UK political system but proof of its failure.

  • The UK has district representation (DR). UKIP got 12.5% of the votes but only 1 seat in a house of 650 seats (see the arithmetic sketch after this list). David Cameron saw that crucial seats of his Conservatives were being challenged by UKIP. Such a threat is amplified under DR. This explains Cameron’s political ploy to call a referendum.
  • If the UK had had equal or proportional representation (EPR), the UKIP protest vote could have been contained, and the UK would have had more scope to follow the example of Sweden (rather than Norway). Obviously, the elephant in the room of the optimal currency area for the Euro would not be resolved by this, but there would have been more time to find solutions. For example, the UK would have had a stronger position to criticise the wage moderation policies in Germany and Holland.
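The arithmetic sketch, in Mathematica, with the numbers just mentioned:

Round[0.125 * 650]   (* EPR would give UKIP about 81 seats *)
N[100 / 650]         (* DR gave 1 seat, i.e. about 0.15% of the seats *)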
The structural cause of disinformation about representation

The 2007+ financial crisis highlighted irresponsible herd behaviour in economic science. Brexit highlights irresponsible herd behaviour in political science. Said paper One woman, one vote. Though not in the USA, UK and France (1W1V) shows that political science on electoral systems (on that topic specifically) is still pre-science, comparable to homeopathy, astrology and alchemy. Thus the UK finds itself in the dismal spot of being disinformed about democracy for decades.

The paper runs through the nooks and crannies of confusion and bias. At various points I was surprised by the subtleties of the particular madness. The paper is rather long, but this has a clear explanation. When an argument has 100 aspects, and people understand 99% correctly and 1% wrongly, but each a different 1%, in continuous fashion, then you really want the full picture if all are to understand it.

But let me alert you to some points.

(1) The paper focuses on Carey & Hix (2011) on an “electoral sweet spot” of 3-8 seats per district. Particular to C&H is that they confuse “most frequent among the good” with “the best”. The district magnitude of 3-8 seats appears most frequent among the cases that satisfy their criteria for being good, and they turn this into the best. Since such DR would be best, say goodbye to EPR. But it is a confusion.

(2) They use fuzzy words like vote and election. But the words mean different things in DR and EPR. In DR, votes are obliterated that EPR translates into seats. Using the same words for different systems, C&H suggest treatment on a par while there are strict logical differences. The Universal Declaration of Human Rights only fits with EPR. Science would use strict distinctions, like “vote in DR” and “vote in EPR”. Political science is still too close to colloquial language, and thus prone to confusion. Obviously I agree that it is difficult to define democracy, and that there are various systems, each with a historical explanation. But science requires clear terms. (See the Varieties of Democracy project, and check that they still have a lot to do too.)

(3) There is a statistical relationship between a measure of disproportionality (EGID) and a measure of the concentrated number of parties (CNP); see the sketch below. C&H interpret the first as “interest-representation” and the latter as “accountability”. An interpretation is something other than a model. Using the statistical regularity, they claim to have found a trade-off between interest-representation and accountability. Instead, the scientific approach would be to explain the statistical regularity for what it is. The suggested interpretation is shaky at best. One cannot use a statistical regularity as an argument on content and political principle (like One woman, one vote).
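For readers who want to verify such regularities themselves, a minimal sketch of the standard building blocks: I take EGID as the Euclidean / Gallagher inequality / disproportionality measure, and the Laakso-Taagepera effective number of parties as a stand-in for CNP (the exact CNP of the paper may differ; the shares are illustrative).

egid[v_, s_] := Sqrt[Total[(v - s)^2]/2]   (* Euclidean / Gallagher measure on shares *)
enp[s_] := 1/Total[s^2]                    (* effective number of parties *)
votes = {0.37, 0.30, 0.13, 0.20};
seats = {0.51, 0.35, 0.00, 0.14};
{egid[votes, seats], enp[seats]}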

(4) They present a mantra, and repeat it, that there would be a trade-off between interest-representation and accountability. The best point [confusion] would be achieved at a district magnitude of 3-8 seats per district. However, they do not present a proper model and measure for accountability. My paper presents such a model, and shows that the mantra is false. Not DR but EPR is most accountable. EPR is obviously most interest-representative, so there is no trade-off. Thus the C&H paper fails in the scientific method of modelling and measuring. It only has the method of repeating tradition and a mantra, with some magic in using interpretations. (Section 3.6 of 1W1V should start opening the eyes of political scientists on electoral systems.)

(5) The C&H paper is the top of a line of research in “political science on electoral systems”. This paper fails and thus the whole line fails. Section 4.5 of 1W1V shows confusion and bias in general in political science on electoral systems, and the C&H paper is no exception to this.

The cure of Brexit

The cure of Brexit might well be that it just happens, and that we must learn to live with it. The EU lives with Norway while NATO has its Arctic training there.

Seen from the angle of the cause via the political structure, it may also be suggested that both France and the UK switch from DR to EPR, and that the newly elected UK House of Commons re-evaluates Brexit or Bregret. This switch may well cause the break-up of the Conservative and Labour parties into Remain and Leave parties, but such would be the consequence of democracy and thus fine by itself. We would no longer have Theresa May, who was for Remain, leading the Leavers, and Jeremy Corbyn, who was for Leave, leading the Remainers. (For an indication, see here.) The other EU member states tend to stick to the Brexit deadline of March 29 2019, but when they observe the cause of Brexit and a new objective in the UK to deal with this (fateful) cause by switching to EPR, then this deadline might be shifted to allow the UK to make up its mind in a proper way.

Obviously, a UK switch to EPR is advisable in its own right, see said paper. It would also allow the new UK House of Commons to still adopt Brexit. The advantage of such an approach and decision would be that it would have the democratic legitimacy that is lacking now.

The relevant contingency of the Sovereignty Bill

Thompson’s article surprised me by her discussion of the 2010 UK Sovereignty Bill (which calls itself an Act). She calls it a “referendum lock”, and indeed it is. The Bill / Act states:

“2 Treaties. No Minister of the Crown shall sign, ratify or implement any treaty or law, whether by virtue of the prerogative powers of the Crown or under any statutory authority, which — (a) is inconsistent with this Act; or (b) increases the functions of the European Union affecting the United Kingdom without requiring it to be approved in a referendum of the electorate in the United Kingdom.”

The approach is comparable to the one in Ireland, in which EU treaties are subject to referenda too. In Holland, only changes in the constitution are subject to new elections and affirmation by the newly elected parliament, while treaties are exempt from this – and this is how the EU constitution of 2005 got rejected in a referendum but the Lisbon treaty got accepted in Dutch parliament. Currently a state commission is investigating the Dutch parliamentary system.

Thompson explains that the UK referendum lock had the perverse effect that EU leaders started to avoid the instrument of a treaty and started to use other ways to enact policies. For EU-minded Ireland, the instrument of a referendum was acceptable, but for the EU-skeptic UK the instrument was a poison pill. Why put much effort into negotiating a treaty if it could be rejected by the UK circus (partly created by its system of DR)?

Thompson explains that while the referendum lock had been intended to strengthen the UK position as a non-euro country w.r.t. the eurozone, in effect it weakened Cameron’s position. The world noticed this, and this weak position was fuel for the Brexiteers.

The relevant contingency of Thatcher’s policies

Brexit is mostly caused in the UK itself. Thompson doesn’t call attention to these relevant contingencies:

  • Margaret Thatcher started as pro-EU and even partook in the abolition of unanimity and the switch to qualified majority rule. My view is that it would have been wiser to stick to unanimity and be smarter in handling different speeds.
  • Secondly, Thatcher supported the neoliberal approach in economics that contributed to austerity and the deterioration of British industry that British voters blame the EU for. There was an obvious need for redress of earlier vulgar-Keynesian errors but there is no need to overdo it. My advice to the UK is to adopt EPR and see what can be learned from Holland and Sweden.
  • Thompson refers to her own 1996 book on the UK and the ERM but doesn’t mention Bernard Connolly, his text The rotten heart of Europe, and his dismissal from the EU Commission in 1995. At that time John Major had become prime minister, and he did not defend Connolly’s position at the EU Commission. A country that is so easy about civil rights and free speech deserves the state that the UK is in. Surely the EU courts allowed the dismissal, but this only means that one should look for better employment safeguards for critical views. Whoever wants to combine independent scientific advice and policy making arrives at the notion of an Economic Supreme Court, see below.
The relevant contingency of migration

I am reminded of the year 1988 at the Dutch Central Planning Bureau (CPB), when we looked at the Cecchini report. One criticism was that the report was too optimistic about productivity growth and less realistic about the costs of displaced workers. An observation of my own, though not further developed, was that, with more job mobility, people might prefer a single language barrier to a double one. People from the UK might move more easily to Northern European countries that speak English well. People from the rest of Europe who have learned some English might prefer to go to the UK, to avoid having to deal with two other languages. I don’t know much about migration, and I haven’t checked whether the UK has a higher share of it or not, and whether this language effect really matters. Given its role in the discussion it obviously would be a relevant contingency. Perhaps the UK and Ireland might claim a special position because of the language effect, and this might encourage other countries to switch to English too. But I haven’t looked into this.

The other elephant in the room

The other elephant in the room is my own analysis in political economy. It provides an amendment to Thompson’s analysis.

  • DRGTPE provides for a resolution of the Great Stagflation that we are in.
  • CSBH provides a supplement for the 2007+ crisis situation.
  • The paper Money as gold versus money as water (MGMW) provides an amendment to the theory of the optimal currency area: when each nation has its own Economic Supreme Court then countries might achieve the kind of co-ordination that is required. This is still a hypothesis but the EU has the option of integration, break up, or try such rational hypotheses. (The Van Rompuy roadmap might speed up integration too much with risk of a break-up.)

The main idea in DRGTPE was available in 1990 with the collection of background papers in 1992 (published by Guido den Broeder of Magnana Mu). Thus the EU might have had a different approach to EMU. The later edition of DRGTPE contains a warning about financial risk that materialised in 2007+. CSBH and MGMW provide a solution approach for the current problems.

If the EU would adopt such policies then there would be much less migration, since people tend to prefer to remain at home (which is why I regard migration as a secondary issue and less in need of study).

If the EU and UK would adopt such policies then there might still be Brexit or Bregret. Thus UK politicians might still end up with the very preference that they are now trying to discover.

Conclusion

My impression is that the above gives a clear structural explanation of the UK decision for Brexit and an indication of which contingent events were relevant. Knowing the cause helps to identify a cure. It is remarkable how large the role of denial is in all of this. Perhaps this story about the polar bear provides a way to deal with this huge denial (as polar elephants a.k.a. mammoths are already extinct).

Karl Pearson (1857-1936) is one of the founders of modern statistics, see this discussion by Stephen Stigler 2008 (and see Stigler’s The Seven Pillars of Statistical Wisdom 2016).

I now want to focus on Pearson’s 1897 paper Mathematical Contributions to the Theory of Evolution.–On a Form of Spurious Correlation Which May Arise When Indices Are Used in the Measurement of Organs.

The main theme is that if you use the wrong model then the correlations for that model will be spurious compared to the true model. Thus Pearson goes to great lengths to create a wrong model, and compares this to what he claims is the true model. It might be, though, that he still didn’t develop the true model. Apart from this complexity, it is only admirable that he points to the notion of such spurious correlation in itself.
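The phenomenon itself is easy to reproduce in Mathematica (a sketch with random data, not Pearson’s own): draw three independent series and correlate their ratios to a common divisor.

SeedRandom[1];
{x, y, z} = RandomReal[{1, 2}, {3, 10000}];
Correlation[x/z, y/z]   (* clearly positive, though x, y and z are independent *)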

One example in Pearson’s paper is the measurement of skulls in Bavaria (p. 495). The issue concerns compositional data, i.e. data vectors that add up to a given total, say 100%. The previous entry on this weblog presented the inequality / disproportionality measure SDID for votes and seats. These become compositional data when we divide them by their sum totals, so that we compare 100% of the votes with 100% of the seats.

Pearson’s analysis got a sequel in the Aitchison geometry, see this historical exposition by Vera Pawlowsky-Glahn and Juan José Egozcue, The closure problem: one hundred years of debate. Early on, I was and still am a fan of the Aitchison & Brown book on the lognormal distribution, but I have my doubts about the need for this particular geometry for compositional data. In itself the Aitchison geometry is a contribution, with a vector space, norm and inner product. When we transform the data to logarithms, then multiplication becomes addition, and powers become scalars, so that we can imagine such a vector space; yet the amazing finding is that rebasing to 1 or 100% can be maintained. It is called “closure” when a vector is rebased to a constant sum. What, however, is the added value of using this geometry?
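For concreteness, a minimal sketch of the operations just mentioned, in their standard textbook form (not code from the cited papers):

closure[x_] := x/Total[x]            (* rebase to the unit simplex *)
perturb[x_, y_] := closure[x y]      (* the vector “addition” of the Aitchison geometry *)
power[a_, x_] := closure[x^a]        (* its scalar multiplication *)
perturb[{0.2, 0.3, 0.5}, {0.5, 0.25, 0.25}]   (* the result again sums to 1 *)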

It may well be that different fields of application still remain different on content, so that when they generate compositional data, then these data are only similar in form, while we should be careful in using the same techniques only because of that similar form. We must also distinguish:

  • Problems for compositional data that can be handled by both Sine / Cosine and the Aitchison geometry, but for which Sine and Cosine are simpler.
  • Problems for compositional data that can only be handled by the Aitchison geometry.

An example of the latter might be the paper by Javier Palarea-Albaladejo, Josep Antoni Martín-Fernández and Jesús A. Soto (2012), in which they compare the compositions of milk of different mammals. I find this difficult to judge on content since I am no biologist. See the addendum below on the distance function.

In a fine overview in sheets, Pawlowsky-Glahn, Egozcue & Meziat (2007) present the following example, adapted from Aitchison. They compare two versions of a set of soil samples, where version A includes the water content of each sample and version B rebases the same samples without it. If you want to spot the problem with this analysis yourself, take a try, and otherwise read on.

When the water content in a sample of A is dropped, then the remaining scores are rebased to a total of 100%, giving B. E.g. for the 60% water in sample 1, this becomes:

{0.1, 0.2, 0.1} / (0.1 + 0.2 + 0.1) = {0.25, 0.5, 0.25}

PM. A more complex example with simulation data is by David Lovell.

Reproduction of this example

It is useful to first reproduce the example so that we can later adapt it.

In Wolfram Alpha, we can reproduce the outcome as follows.

For A, the input code is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; Correlation[mat1] // Chop // MatrixForm.

For B, the input code is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; droplast[x_List?VectorQ] := Module[{a}, a = Drop[x, -1]; a / (Plus @@ a)]; mat2 = droplast /@ mat1; Correlation[mat2] // Chop // MatrixForm.

The confusion about the correlation

In the previous weblog entry, we had SDID[v, s] for the votes v and seats s. In this way of thinking, we would reason differently. We would compare (correlate) rows and not columns.

There is also a difference in that correlation uses centered data while Sine and Cosine use original or non-centered data. Perhaps this contributed to Pearson’s view.

One possibility is that we compare sample 1 according to A with sample 1 according to B, as SDID[1A*, 1B]. Since the measures of A also contain water, we must drop the water content and create A*. The assumption is that A and B are independent measurements, and that we want to see whether they generate the same result. When the measurements are not affected by the content of water, then we would find zero inequality / disproportionality. However, Pawlowsky et al. do not state the problem as such.

The other possibility is that we would compare SDID[sample i, sample j].

Instead of using SDID for inequality / disproportionality, let us now use the cosine as a measure for similarity.

For A, the input code is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; cos[x__] := 1 - CosineDistance[x]; Outer[cos, mat1, mat1, 1] // Chop // MatrixForm.

Since the water content is not the same in all samples, the above scores will be off. To see whether these similarities are sensitive to the contamination by the water content, we look at the samples according to B.

The input code for Wolfram Alpha is: mat1 = {{0.1, 0.2, 0.1, 0.6}, {0.2, 0.1, 0.2, 0.5}, {0.3, 0.3, 0.1, 0.3}}; cos[x__] := 1 - CosineDistance[x]; droplast[x_List?VectorQ] := Module[{a}, a = Drop[x, -1]; a / (Plus @@ a)]; mat2 = droplast /@ mat1; Outer[cos, mat2, mat2, 1] // Chop // MatrixForm.

Since the water content differed so much per sample, and apparently is not considered to be relevant for the shares of the other components, the latter matrix of similarities is most relevant.

If we know that the samples are from the same soil, then this would give an indication of sample variability. Conversely, we might have information about the dispersion of samples, and perhaps we might determine whether the samples are from the same soil.

Obviously, one must have studied soil samples to say something on content. The above is only a mathematical exercise. This only highlights the non-transposed case (rows) versus the transposed case (columns).

Evaluation

Reading the Pearson 1897 paper shows that he indeed looks at the issue from the angle of the columns, and that he considers calibration of measurements by switching to relative data. He gives various examples, but let me show the case of skull measurement, which may still be a challenge.

Pearson presents two correlation coefficients for B / L with H / L (breadth and height relative to length). One is based upon the standard definition (that allows for correlations between the levels), and one, baptised “spurious”, is based upon the assumption of independent distributions (and thus zero correlations for the levels). Subsequently he throws doubt on the standard correlation because of the high value of the spurious correlation.

One must be a biologist or even a skull specialist to determine whether this is a useful approach. If the true model would use relative data with zero correlations, what is the value of the assumptions of zero or nonzero correlations for the absolute values? What is useful depends upon the research question too. We can calculate all kinds of statistics, but what decision is intended?

It is undoubtedly a contribution by Pearson that looking at phenomena in this manner can generate what he calls “spurious correlation”. Whatever the model, it is an insight that using the wrong model can create spurious correlation and a false sense of achievement. I would feel more comfortable, though, if Pearson had also mentioned the non-transposed case, which I would tend to regard as the proper model, i.e. comparing skulls rather than correlating categories over skulls. Yet he doesn’t mention it.

Apparently the Aitchison geometry provides a solution to Pearson’s approach, thus still looking at transposed (column) data. This causes the same discomfort.

Pro memoria

The above uses soil and skulls, which are not my expertise. I am more comfortable with votes and seats, or budget shares in economics (e.g. in the Somermeyer model or the indirect addilog demand system, Barten, De Boer).

Conclusion

Pearson was not confused about what he defined as spurious correlation. He might have been confused about the proper way to deal with compositional data, namely looking at columns rather than rows. This, however, also depends upon the field of interest and the research question. Perhaps a historian can determine whether Pearson also looked at compositional data from rows rather than columns.

Addendum November 23 2017

For geological data, Watson & Philip (1989) already discussed the angular distance. Martín-Fernández, Barceló-Vidal & Pawlowsky-Glahn (2000), “Measures of differences for compositional data and hierarchical clustering methods”, discuss distance measures. They also mention the angle between two vectors, found via arccos[cos[v, s]], for votes v and seats s. It is mentioned in the 2nd row of their Table 1. The vectors can also be normalised to the unit simplex as w = v / Sum[v] and z = s / Sum[s], though cos is insensitive to this, with cos[w, z] = cos[v, s].
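A quick check of that closure insensitivity (a sketch with made-up numbers):

cosVS[v_, s_] := v.s / (Norm[v] Norm[s])
angle[v_, s_] := ArcCos[cosVS[v, s]]   (* the angular distance arccos[cos[v, s]] *)
v = {10., 20., 30.}; s = {15., 15., 30.};
{angle[v, s], angle[v/Total[v], s/Total[s]]}   (* identical: normalising to the simplex doesn’t change the angle *)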

In sum, the angular distance, or the use of the sine as a distance measure and the cosine as a similarity measure, satisfies the Aitchison criteria of invariance to scale and permutation, but does not satisfy subcompositional dominance and invariance to translation (perturbation).

This discussion makes me wonder whether there still are key differences between kinds of compositional data in terms of concepts. The compositional form should not distract us from the content. For a Euclidean norm, a translation leaves a distance unaffected, as Norm[x - y] == Norm[(x + t) - (y + t)]. This property can be copied for logratio data. However, for votes and seats, it is not clear why a (per party different) percentage change vector should leave the distance unaffected (as happens in the logratio distance).
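A numerical check of that perturbation property, using the standard centred logratio form of the Aitchison distance (a sketch with made-up numbers):

clr[x_] := Log[x] - Mean[Log[x]]                    (* centred logratio transform *)
aDist[x_, y_] := Norm[clr[x] - clr[y]]              (* Aitchison distance *)
closure[x_] := x/Total[x]
x = {0.2, 0.3, 0.5}; y = {0.1, 0.4, 0.5}; p = {2., 1., 3.};
{aDist[x, y], aDist[closure[x p], closure[y p]]}    (* equal: the party-wise change vector p leaves the logratio distance unaffected *)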

An election only gives votes and seats. Thus there is no larger matrix of data. Comparison with other times and nations has limited meaning. Thus there may be no need for the full Aitchison geometry.

At this moment, I can only conclude that Sine (distance) and Cosine (similarity) are better for votes and seats than what political scientists have been using till now. It remains to be seen for votes and seats whether the logratio approach would be better than the angular distance and the use of Sine and Cosine.