
Listening to Theodorakis & Ritsos: Lianotragouda

There is a General Theory of Knowledge (GTOK) implicit in earlier weblog entries. It is better made explicit. Let me first draw the diagram and then discuss it. Relevant weblogs are:

A General Theory of Knowledge (GTOK)

The diagram, together with the above weblog entries, is rather self-explanatory.

  • What I may need to explain as an author is how this relates to my own work.
  • A nice introduction to epistemology, at the level of the international baccalaureate (IB) programme, is the book by Richard van de Lagemaat (CUP, now in a new 2015 edition).
  • A general principle is that philosophy should use mathematics education as its empirical field of reference. When philosophy hangs in the air, it is at risk of getting lost. Mathematics education provides an adequate challenge for dealing with abstract notions.

Some main steps in the diagram are:

  1. Jean Piaget introduced stages of development. Epistemology tends to focus on the last stage, with a fully developed rational being who wonders what can be known and how this can be achieved. However, it makes sense to distinguish stages in such questions. Pierre van Hiele removed Piaget’s dependence of the stages upon age, and turned the issue into a logical framework for epistemology. With the Definition & Reality methodology this framework is also empirically relevant. This is also very useful for the link of philosophy to education. See Pierre van Hiele and epistemology.
  2. Karl Popper turned Otto Selz’s methodology for psychology into a philosophy of science in general. This uses falsifiability as a demarcation between science and non-science. Since the Anglo-Saxon world tends to distinguish science from the humanities (humaniora), the general term “theory of knowledge” (epistemology) will do.
  3. Selz inspired Adriaan de Groot to create his experiments with chess masters. Later De Groot continued in methodology, and it seems that he is the one who introduced the empirical cycle. His book Methodologie ends in the depressing awareness that science cannot establish truth as mathematics can. Thus De Groot advances the uplifting Forum Theory, which focuses on the rules of conduct within the scientific community. While we may not discover the real truth, we can still ask why we should trust these guys and gals.
  4. De Groot and Van Hiele were also inspired by their UvA math teacher Gerrit Mannoury (1867–1956). See this project about Mannoury and significa.
  5. The dashed arrow from Van Hiele to De Groot is the unfortunate failed transfer of the theory of levels of insight. De Groot refers to the thesis but missed this notion, see this discussion.
  6. My book A Logic of Exceptions (ALOE) (1981, 2007, 2011) is already deep into methodology. ALOE looks into the logical paradoxes and suggests that empirical sense may help to get rid of mathematical nonsense. There is a distinction between Gödel’s theorems and the interpretation that he gave to them. For the issue of volition, determinism and chance there is no experiment that allows us to distinguish what is empirically the case. (I haven’t yet looked at the interpretation of the recent experiment with Bell’s inequality at TU Delft, see the websites by Ronald Hanson and Richard Gill.)
  7. The abbreviation DRGTPE stands for the book Definition & Reality in the General Theory of Political Economy. This 2000, 2005, 2011 book had a precursor, called Background Papers to DRGTPE, that collected papers from 1989-1992. This essentially gave the framework for political economy, in both mathematical model and empirical methodology. The 1994 book Trias Politica & Centraal Planbureau (TP & CPB) (in Dutch) referred to De Groot’s Forum Theory to clinch the argument for an Economic Supreme Court (ESC). Subsequently, DRGTPE 2000 contains a constitutional amendment on how the ESC should satisfy such Forum rules.
  8. The news in November 2015 is that I have grown more aware of the importance of Forum Theory for the selection of definitions for applications. This element is implicit in the earlier development, but it is useful to state it explicitly, given the importance of the role of definitions. Research groups might be characterised by the definitions that they select. How flexible research groups are with experiments and adverse information can depend upon the quality of the Forum rules.

Thus, to restate in text what is depicted in the last box in the diagram: This 2015 GTOK has the standard logic (with ALOE), methodology (with Forum Theory), and epistemology, and has more awareness of:

  • levels of insight or understanding
  • Definition & Reality methodology
  • Forum Theory is especially required for the application of definitions.

Some applications of this GTOK are:

(1) My forecast in 1990 (CPB memo 90-III-38) was that unemployment would remain high unless Parliament redesigned both the structure of policy making and some policies and markets. I repeated this forecast in 1992, 1994 and 2000, extending it with other risks, such as those on the environment and financial markets, and with the condition of the Economic Supreme Court. In the period 1990-2007 Holland seemed to have a lower level of unemployment, which may explain why people paid no attention to the analysis. This lower level wasn’t achieved by better policies but by welfare payments (financed by natural gas) and by exporting unemployment by means of maintaining low wages (beggar thy neighbour). The 2007+ crisis and the return to higher unemployment confirm my analysis. Though a major element relies on definitions, the forecast as a whole was still falsifiable. Of course the forecast was vague, and did not specify the year 2007, but we are dealing with structure. This also explains why I emphasize that Dirk Bezemer misinforms Sweden and Dutch Parliament: he keeps silent about the theoretical confirmation given by the empirical experiment of 1990-2007.

(2) The scheme allows us to deal with the confusions by Stellan Ohlsson (abstract to concrete) and Ben Wilbrink (Van Hiele’s theory of levels wouldn’t be empirical).

(3) The scheme allows us to deal with the problem of universals. Van Hiele “demonstrated” the general applicability of the theory of levels by using the example of geometry. (And geometry uses demonstration as a method of proof too.) He mentioned that the theory had general applicability, with chemistry and didactics as other examples, but without working out those examples. Freudenthal neglected Van Hiele’s general claim, put him into the box of “geometry only”, and claimed that he, Freudenthal himself, had shown the applicability to mathematics in general. (See here.) Of course, Freudenthal also had the problem that a universal proof is impossible, since you would need to check each field of knowledge. However, now, with the definition & reality methodology, we can take the levels of insight as a matter of definition, just as the law of conservation of energy defines what we regard as “energy”. The problem shifts to application. For this, there is Forum Theory.


Listening to Andriopoulos & Odysseas Elytis (1984): Prosanatolismoi


Let us discuss Gerald Goldin (2003), Developing complex understandings: On the relation of mathematics education research to mathematics. I presume that the reader has checked earlier discussions on Goldin (1992), on epistemology and on Stellan Ohlsson.

The paper’s abstract is:

Goldin (2003), p171

It took me a while to come to grips with this paper. Suddenly it dawned on me that the English-speaking world, including Goldin, makes a distinction between science and the humanities. This is what C.P. Snow (1905-1980) called The two cultures.

For Dutch readers these categories are crooked.

  • When Goldin opposes mathematics education research (MER), which in the English-speaking world belongs to the humanities, to science including mathematics, then this is the distinction between science and the humanities. But to the Dutch it sounds very strange to suggest that MER would be non-science.
  • Dutch has the single word wetenschap. How can Goldin oppose things that are the same (learning)? How can he lump together things that are different (science and mathematics)?

Dutch convention categorises the humanities as alpha (α), science and mathematics as beta (β) and the mixture as gamma (γ): those deal with alpha subjects but use beta methods. MER would be gamma.

I am not too happy with the Dutch categories since they don’t account for the separate position of philosophy and mathematics. The better distinctions are in the next table.

Categories for general science (science and the humanities)

The table above is intended to categorise whole disciplines, like physics and economics. But we can also look at sub-areas within a discipline. Since both philosophy and mathematics can run astray without some link to the external world, my suggestion is that they both take mathematics education research as an anchor to reality (but they remain what they are when they refuse to do so). They might take an example from the writing of history: most history writing uses non-experimental methods (see the history on Pierre van Hiele), but some historians rely on the experimental sciences to recover data from the past.

We are now ready to look at the paper.

Below we will see that Goldin opposes α + Φ versus β = δ + μ, forgetting about γ, while my analysis in Elegance with Substance (EWS) (2009, 2015) (pdf online) diagnoses the problem as α + μ versus γ, while δ + Φ have run away and no longer want to take part in cleaning up the mess. Professor Hung-Hsi Wu of Berkeley calls in the help of research mathematicians μ to clean up the mess in ME and MER, but in my analysis we need help from engineers and other researchers in the empirical sciences δ + γ, see here.

Goldin 2003 on the decade since 1992: integrity for the disciplines

Goldin’s paper discusses his background, and he seems very well placed to discuss mathematics, ME and MER. Goldin sees a math war and tries to bring calm by increasing complexity. His article is complex itself so that those who pass the test of reading it will understand enough of the various sides of the discussion and be less likely to vilify the other side.

Goldin’s position is that discussants on MER must respect what other discussants on MER are doing and good at. Scientific integrity tends to focus on ethical behaviour of the individual but Goldin widens this to whole disciplines. Scientists must respect the humanities. The humanities must respect science. Otherwise there is no communication and no progress.

Goldin (1992) looked back at the New Math in the 1960s and behaviourism in the 1970s. When those ‘isms’ failed to produce improvement in mathematics education, the educational departments in the humanities grabbed the opportunity to claim their way to success. Goldin would agree partly, since he in 1992 also opposed the New Math and behaviourism. The humanities however created their own ‘isms’. We can now better understand Goldin’s position w.r.t. the decade 1992-2003.

Goldin (2003), p177

A key observation is that Pierre van Hiele (1909-2010) is missing in this list and that Hans Freudenthal (1905-1990) committed fraud with respect to the work by Van Hiele, so that Goldin has a somewhat rosy view about the “without the far-reaching dismissals, oversimplifications, and ideologies”. The reference to Leen Streefland (1998†) may highlight the ‘ism’.

A pro memoria point is that David Tall in 2002 apparently misunderstood the Van Hiele theory, as applying only to geometry and not to epistemology in general. This doesn’t seem to be due to ideology on Tall’s part, but there seems to have been some influence of Freudenthal in the misrepresentation of Van Hiele’s work. See my paper on getting the facts right.

An example with Leen Streefland (1998†)

I have not studied Streefland’s work any further than the following internet links. Those links fit the diagnosis of sectarian behaviour of Freudenthal’s “realistic mathematics education” (RME), and thus I see no reason yet to read more. Streefland belonged to the Freudenthal sect, see this ESM 2003 issue. Pierre van Hiele suggested in 1973 to look into the abolition of fractions, but Streefland (1991) persevered with a book on “realistic education” on fractions.

See my 2015 book, pdf online, A child wants nice and no mean numbers, also commenting on the US Common Core program and professor Hung-Hsi Wu on fractions. Professor Wu does not belong to the RME sect but his traditional answer on fractions suffers from the intellectual burying of Van Hiele, which the RME sect so effectively achieved. The ‘isms’ are not without cost.

The strategy by Hans Freudenthal and his Utrecht sect – and these are adults who know what they are doing – is to absorb elements of Van Hiele’s work, but misrepresent it to fit their own ideology – a change that does not diminish the intellectual theft. They achieve two effects: (i) for an innocent audience they ride the wave of the success by Van Hiele that they are jealous of, and (ii) they exclude Van Hiele himself from the discussion since “they tell it better” – and thus Van Hiele’s protest that his work is abused will not be heard. After all, Freudenthal was a professor in Utrecht with his own Ph.D. students who later became professors, while Van Hiele remained a mere mathematics teacher doing his writings in the weekend.

In this book on fractions, Streefland (1991) p2 states the following. We can excuse authors for the uncreative use of the word “level” that pops up everywhere. The true problem lies in the ideological following of Treffers (1987) and the neglecting in 1991 of Van Hiele’s own work (not only on fractions of 1973).

Streefland (1991), p2

This closes the circle: (a) Treffers (1987)’s misrepresentation of Van Hiele’s work is not only in Streefland (1991) but (b) was also copied in the 1993 MORE study, (c) critically discussed by Ben Wilbrink, here, (d) which alerted me to Wilbrink’s misrepresentation of Van Hiele. Wilbrink follows the RME abuse, and he also tends to include Van Hiele in the RME sect instead of saving him from it.

Thus we are back in the Dutch math war swamp, with on one side the RME sect and on the other side Jan van de Craats and others who try to save “traditional mathematics education sanity”, alongside psychologist Wilbrink with his misapprehension of empirical science and Van Hiele. My position is that of Sherlock Holmes, observing it all from the high ground.

Traditional mathematics is crooked as well. For example, it involves the torture of kids by fractions. There is every reason to desire change. It doesn’t help when mathematicians, who don’t have empirical training, team up with the humanities, who don’t have empirical training either (i.e. α + μ).

Popper’s falsification

One reason why the humanities might be disrespectful of science has to do with Karl Popper’s demarcation theory, which uses falsification to distinguish science from non-science:

Goldin (2003), p178

Goldin reminds us that the humanities are non-science, as seen from science and its experimental method. The humanities should heed the risk of turning this property into a claimed superiority.

The humanities seem to have learned that they should not claim higher wisdom, which they and only they can discover by reading old documents and watching plays by Aristophanes and Shakespeare, with reception parties afterwards to discuss the faculty gossip. But the humanities might still take the Humean skeptic position, and make fun of physicists, who can put electrodes upon skulls but in that manner will likely never be able to create the insights that a study of the humanities can generate (though they might actually prove some of the gossip).

Goldin’s argument: physicists can be skeptics too. Save those skeptic arguments for your autobiography, for they contribute nothing to the discussion.

My warning: don’t make too much of falsification. See the discussion on epistemology and the definition & reality methodology. The δ and γ sciences above rely for the empirical realm upon definitions, and a mathematician μ might well hold that definitions are non-experimental.

David Hume and Ernst von Glasersfeld

Reading a bit more about Ernst von Glasersfeld (1917-2010) had long been on my to-do list, and Goldin’s article finally caused me to do so. His own article Thirty Years Radical Constructivism, Constructivist Foundations 2005, vol. 1, no. 1, p9-12, is advisable. It is very useful to see Von Glasersfeld’s background in mathematics (not completed because of WW2), linguistics and cybernetics: γ rather than α. For methodological justification he might be forced to do some philosophy, but he rejects doing that.

Von Glasersfeld (1995) Radical Constructivism at ERIC is too much for now, though. I checked that he indeed discusses Hume, and also mentions the “problem” of induction (see my discussion of epistemology). Von Glasersfeld holds that the issue is not philosophy but finding mechanisms of cognition.

Comment 1: Reuben Hersh (2008) Skeptical Mathematics? Constructivist Foundations 3(2): 72, suggests that “radical constructivism” would be Humean skepticism, and I tend to agree.

Comment 2: Being a Humean skeptic is agreeable too. This (wiki-) quote by Von Glasersfeld seems accurate:

“Once knowing is no longer understood as the search for an iconic representation of ontological reality but, instead, as a search for fitting ways of behaving and thinking, the traditional problem disappears. Knowledge can now be seen as something which the organism builds up in the attempt to order the as such amorphous flow of experience by establishing repeatable experiences and relatively reliable relations between them. The possibilities of constructing such an order are determined and perpetually constrained by the preceding steps in the construction. That means that the “real” world manifests itself exclusively there where our constructions break down. But since we can describe and explain these breakdowns only in the very concepts that we have used to build the failing structures, this process can never yield a picture of a world that we could hold responsible for their failure.”

It is hard to disagree, except when you want to resort to Wigner’s magic again (see Appendix 2). But it doesn’t tell us how to design a course so that Johnny can learn arithmetic. Or how to abolish fractions.

Comment 3: Von Glasersfeld refers to Jean Piaget. Pierre van Hiele developed his theory of levels of insight, starting from Piaget as well, but eventually rejecting Piaget’s age-dependency and opting for the logical structure that generates a general theory for epistemology.

It is a question what the contacts between Von Glasersfeld and Van Hiele were, and whether Hans Freudenthal was an interfering factor again. We find Von Glasersfeld (ed) (1991), Radical Constructivism in Mathematics Education, Kluwer, which contains a chapter by Jan van den Brink, since 1971 a member of Freudenthal’s sect in Utrecht. Searching the book generates zero references to “Hiele”. The wiki on RME refers to Von Glasersfeld’s book but not to Van Hiele, even though we saw above that Streefland refers to Treffers, who considered the Van Hiele levels a “pillar” of RME. Not referring saves the effort of thinking up a lie.

It is a question how the departments on education in the humanities were influenced by RME and Von Glasersfeld and others, and how they got so entangled that Goldin seems to tend to refer to them as one side of the equation (or rather imbalance). It is not a key question, but something for historians of MER to be aware of (see the handbook on history of MER).

Comment 4: I started getting lost on what makes “constructivism” so special that it must be mentioned. Originally I knew about constructivism as an approach in the foundations of mathematics, as distinguished from formalism and platonism. My book Foundations of Mathematics. A Neoclassical Approach to Infinity (FMNAI) (2015) creates a ladder of degrees of constructivism (avoiding “levels”), in which the highest degree allows non-constructivist methods. When people use different approaches we should at least describe what they are doing.

But now there are all kinds of “constructivism” in education, psychology and philosophy, without authors taking the time to briefly explain what the non-constructivist opposition would entail. Fortunately, there is wikipedia, which might help or contribute to confusion, here with a disambiguation page and here with the common denominator in epistemology. The opposite of constructivism would be that people could know objective reality, by magic, and I wonder whether that is so useful an idea. My impression is that there is more to it. Thus authors should still specify. (And then I would not have time to read it.)

Ben Wilbrink is horribly erroneous about Pierre van Hiele, and in breach of scientific integrity for not looking into it and correcting his misrepresentation, and Wilbrink can fulminate against constructivism: but at least he referred to this article by Gerald Goldin so that I found it, and he also has this page with all kinds of references on constructivism.

One book mentioned there is by Kieran Egan (2002), Getting it Wrong from the Beginning: Our Progressivist Inheritance from Herbert Spencer, John Dewey, and Jean Piaget. This fits Van Hiele’s rejection of Piaget’s theory of stages of development. But does Egan refer to Van Hiele? Not likely, since the wikipedia portal speaks about the constructivist “idea that things (especially learning) always go from simple to complex” – and this is not how Van Hiele would phrase it: he discussed going from concrete to abstract, and used the notion of proof to identify the highest level of abstraction.

Wilbrink also has a quote on Von Glasersfeld:

“The basic idea of The Georgia Center was to establish a community of researchers in mathematics education working on problems of interest to the community, where the experience of the researcher, conceptual analysis, and social interaction replaced the controlled experiment as “normal science.” No longer did it seem necessary to use the controlled experiment with its emphasis on statistical tests of null hypotheses and empirical generalization to claim that one was working scientifically.”

This is complex. Before you do such a costly double-blind randomised trial, with the huge numbers required because of the large number of variables, the variety in pupils, and the sources of measurement error (see John Hattie), it is useful to have clarity on concepts, definitions, operationalisations, methods, controls, and the like. Confronted with annual unpredictable changes from the Ministry of Education, you might want to give up on such statistical ambitions, and settle for the Google “do no evil” approach. It may well be that modern MER only serves for Ph.D. students to defend a thesis, and the relevance for education may be discussed at the reception party afterwards along with the faculty gossip.

The increasingly popular Japanese Lesson Study is one promising method (tested under Japanese conditions) to deal with the data problem.

However, see the suggestion for Academic Schools modeled after the Medical School, also included in A child wants nice and no mean numbers (2015).

Goldin’s crucial blind spot

What I consider Goldin’s blind spot is that he lumps together science and mathematics, while mathematics is no empirical science but deals with abstraction and patterns.

Goldin (2003), p179

Education is an empirical issue. Also mathematics education is an empirical issue. Thus the involvement of mathematicians in such education can be disastrous, when they are trained for abstraction μ and not for empirical science γ. The epitome of the abstract mathematician who got lost in this is Hans Freudenthal who invented a whole new ‘reality’ just to make sure that at least he himself understood and was happy how the world works (including the oubliette for Pierre van Hiele).

The only reason that Goldin lumps together β = δ + μ is that he is so much worried by the ‘isms’ of α + Φ that he forgets about the real problem at the bottom of the case: the disastrous influence of μ in 5000 years of mathematics education. (Fractions were already a problem for the pyramids.)

Check out this example of mathematical torture of kids on fractions. This torture is also supported by professor Hung-Hsi Wu of Berkeley for the US Common Core programme, see A child wants nice and no mean numbers.

Goldin (2003) p180 suggests a seemingly good argument for lumping together science and mathematics.

Goldin (2003) p180

Thus, the abstract thinking mathematician has a special trick to describe the physical world? Without looking? Like with Wigner’s magic wand? I don’t think that we should believe this. It is physics that selects the useful model from the mathematical possibilities. Thus:

  • This misconception about the role of mathematics may help explain why Goldin (2003) does not quite see the disastrous influence of abstract thinking mathematicians upon ME and MER. Goldin does make some comments that mathematicians should not think that ME is simple and can be tested as in behaviourism, but he misses the fundamental problem as discussed in Elegance with Substance.
  • The remark about mathematics and empirical modeling remains relevant for the definition & reality methodology. It supports the empirical status of say the definition / law of conservation of energy, and it supports the empirical status of Van Hiele’s theory of levels of insight (abstraction) in epistemology (and application in psychology).

(1) We can support Goldin’s conclusion and plea for eclecticism (yes, another ‘ism’).

Goldin (2003), p198

(2) Since the Freudenthal Head in the Clouds Realistic Mathematics Institute (FHCRMI) in Utrecht doesn’t do MER but performs sectarian rituals, also based upon Freudenthal’s fraud, it is a disgrace to general science – including the humanities – and thus it should be abolished as soon as possible. Dutch Parliament better investigates how this could have happened and endured for so long.

Appendix 1. Kurt Gödel

W.r.t. the following I can only refer to A Logic of Exceptions (1981, 2007, 2011) (pdf online). For interesting systems the Gödeliar collapses to the Liar paradox, so that no sensible conclusions follow.

Goldin (2003), p187

Appendix 2. Philosophy of mathematics

W.r.t. the following I can refer to the discussion on Wigner on the “unreasonable effectiveness of mathematics”. Given the common meaning of “unreasonable”, Wigner must refer to magic, or he didn’t know what he was writing about, as a physics professor lost in the English language. It is some kind of magic that his paper got so much attention. This discussion has also been included in Foundations of Mathematics. A Neoclassical Approach to Infinity (2015) (pdf online). Goldin uses the word “extraordinary” rather than “unreasonable”. Given that the effectiveness is ordinary for physics, he seems to take the humanities’ point of view here (at whom his article is aimed).

Goldin (2003), p188

The earlier discussion on Stellan Ohlsson brought up the issue of abstraction. It appears useful to say a bit more on terminology.

An unfortunate confusion at wikipedia

Wikipedia – no source but a portal – on abstraction creates a confusion:

  1. Correct is: “Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose.” Thus there is a distinction between abstract and concrete.
  2. Confused is: “For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, eliminating the other characteristics of that particular ball.” However, the distinction between abstract and concrete is something else than the distinction between general and particular.
  3. Hopelessly confused is: “Abstraction involves induction of ideas or the synthesis of particular facts into one general theory about something. (…) Bacon used and promoted induction as an abstraction tool, and it countered the ancient deductive-thinking approach that had dominated the intellectual world since the times of Greek philosophers like Thales, Anaximander, and Aristotle.” This is hopelessly confused since abstraction and generalisation (with possible induction) are quite different. (And please correct for what Bacon suggested.)

A way to resolve such confusion is to put the categories in a table and look for examples for the separate cells. This is done in the table below.

In the last row, the football itself would be a particular object, but the first statement refers to the abstract notion of roundness. Mathematically only an abstract circle can be abstractly round, but the statement is not fully mathematical. To make the statement concrete, we can refer to statistical measurements, like the FIFA standards.

The general statement All people are mortal comes with the particular Socrates is mortal. One can make the issue more concrete by referring to say the people currently alive. When Larry Page would succeed in transferring his mind onto the Google supercomputer network, we may start a philosophical or legal discussion whether he still lives. Mutatis mutandis for Vladimir Putin, who seems to hope that his collaboration with China will give him access to the Chinese supercomputers.

Category (mistake) | Abstract                          | Concrete
General            | The general theory of relativity  | All people living on Earth in 2015 are mortal
Particular         | The football that I hold is round | The football satisfies FIFA standards
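The distinction in the table can be sketched in code (a hypothetical illustration of my own; the class, attribute names and tolerance are assumptions, with the 68-70 cm circumference taken from the FIFA standard): abstraction projects one particular object onto a single aspect, while generalisation quantifies over many particular objects.

```python
from dataclasses import dataclass

@dataclass
class Football:
    """A particular, concrete object with many attributes."""
    brand: str
    colour: str
    circumference_cm: float

def is_round(ball: Football, tolerance_cm: float = 1.5) -> bool:
    # Concrete check against a measurable standard (FIFA: 68-70 cm circumference).
    return 68.0 - tolerance_cm <= ball.circumference_cm <= 70.0 + tolerance_cm

def abstract_to_roundness(ball: Football) -> bool:
    # Abstraction: leave out elements (brand, colour), keep only roundness.
    return is_round(ball)

# Generalisation: enumerate over many particular cases.
balls = [Football("X", "white", 69.0), Football("Y", "orange", 69.5)]
all_round = all(abstract_to_roundness(b) for b in balls)
```

Note that the abstraction step discards information about one object, while the generalisation step adds no new concept: it merely enumerates.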
The complex relation between abstract and general

The table above obscures the fact that the relation between abstract and general still raises some questions. Science (Σ) and philosophy (Φ) strive to find universal theories – indeed, a new word in this discussion. Science also strives to get the facts right, which means focusing on details. However, such details basically relate to those universals.

The following table looks at theories (Θ) only. The labels in the cells are used in the subsequent discussion.

The suggestion is that general theories tend to move in the abstract direction, so that they become universal by (abstract) definition. Thus “universal” is another word for “abstract definition”.

A definition can be nonsensical, but Σ strives to eliminate the nonsense, and officially Φ has the same objective. A sensible definition can be relevant or not, depending upon your modeling target.

(Θ) Aspects of scientific theories | (Σ) Science | (Φ) Philosophy
(A) Abstract definition (developed mathematically or not) | (AΣ) Empirical theory, e.g. the law of conservation of energy, economics Y = C + S, Van Hiele levels of insight | (AΦ) Metaphysics
(G) General | (GΣ) Statistics | (GΦ) Problem of induction
(R) Relation between (A) and (G) | (RΣ) (a) Standards per field, (b) statistical testing of GΣ, (c) Definition & Reality practice | (RΦ) (a) Traditional epistemology, (b) Popper, (c) Definition & Reality theory

Let us redo some of the definitions that we hoped to see at wikipedia but didn’t find there.

Abstraction is to leave out elements. Abstractions may be developed as models for the relevant branch of science. The Van Hiele levels of insight show how understanding can grow.

A general theory applies to more cases, and intends to enumerate them. Albert Einstein distinguished the special and the general theory of relativity. Inspired by this approach, John Maynard Keynes’s General Theory provides an umbrella for classical equilibrium (theory of clearing markets) and expectational equilibrium (confirmation of expectations doesn’t generate information for change, raising the question of dynamic stability). This General Theory does not integrate the two cases, but merely distinguishes statics and its comparative statics from dynamics as different approaches to discuss economic developments.
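The identity Y = C + S used as an example of an abstract definition can be spelled out; the equilibrium step below is the standard textbook reading (my gloss, not a quote from Keynes or DRGTPE):

```latex
% Income Y is allocated by definition to consumption C and saving S:
Y = C + S
% Expenditure on output consists of consumption C and investment I:
Y = C + I
% Subtracting the two identities: in (static) equilibrium, saving equals investment:
S = I
```

The point for the table above is that such an identity is not falsifiable itself; only the relevance of the model built upon it can be tested.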

Abstraction (A) is clearly different from enumeration (G). The enumerated items may themselves be abstract again, but it suffices here that this need not be the case. A general theory may concern the enumeration of many particular cases. It would be statistics (GΣ) to collect all these cases, and there arises the problem of induction (GΦ) whether all swans will indeed be white.

Having both A and G causes the question how they relate to each other. This question is studied by R.

This used to be discussed by traditional epistemology (RΦ(a)). An example is Aristotle. If I understand Aristotle correctly, he used the term physics for the issues of observations (GΣ) and metaphysics for theory (AΦ & GΦ). I presume that Aristotle was not quite unaware of the special status of AΣ, but I don’t know whether he said anything on this.

Some traditional epistemologists (RΦ(a)) neglect Σ and only look at the relation between GΦ and AΦ. That is the price of specialisation.

Statistical testing (RΣ(b)) shows a similar specialisation in focus: it only looks at statistical formulations of general theories (GΣ).

The falsification theory by Karl Popper may be seen as a philosophical translation (RΦ(b)) of this statistical approach (RΣ(b)). Only those theories that are formulated in such a manner that they can be falsified receive Popper’s label “scientific”. A black swan will negate the theory that all swans are white. (1) One of Popper’s problems is the issue of measurement error, encountered in RΣ(b), with the question how one is to determine sample size and level of confidence. Philosophy may only be relevant here if it becomes statistics again. (2) A second problem for Popper is that the theories in AΣ are commonly seen as scientific, while only their relevance can be falsified. Conservation of energy might be relevant for Keynes’s theory, but not necessarily conversely.
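Popper’s first problem can be made concrete with a minimal statistical sketch (my illustration, with a function name of my own choosing): after observing n swans that were all white, an exact one-sided confidence bound, the basis of the well-known “rule of three” (roughly 3/n at 95% confidence), shows that no finite sample can falsify the possibility of a black swan; the sample can only bound its probability.

```python
def upper_bound_nonwhite(n, confidence=0.95):
    """One-sided upper confidence bound on the proportion p of
    non-white swans, after observing n swans that were all white.
    Solves (1 - p)^n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# Even many uniformly white observations leave room for black swans;
# the bound shrinks with n but never reaches zero.
for n in (10, 100, 1000):
    print(n, upper_bound_nonwhite(n))
```

For n = 100 the bound is close to 3/100 = 0.03, which illustrates why the choice of sample size and confidence level remains a decision that falsificationism itself cannot settle.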

The Definition & Reality methodology consists of theory (RΦ(c)) and practice (RΣ(c)). The practice is that scientists strive to move from the particular to AΣ. The theory is why and how. A possible intermediate stage is G but at times direct abstraction from concreteness might work too. See the discussion on Stellan Ohlsson again.


Apparently there exist some confusing notions about abstraction. These can however be clarified, see the above.

The Van Hiele theory of levels of insight is a major way to understand how abstraction works.

Paradoxically, his theory is maltreated by some researchers who don’t understand how abstraction works. It might be that they first must understand the theory before they can appreciate it.

Mathematics education research (MER) not only looks at the requirements of mathematics and the didactics developed in the field itself, but also at psychology on cognition, learning and teaching in general, at pedagogy on the development of pupils and students, and at other subjects, such as physics or economics for cases when mathematics is applied, or general philosophy indeed. The former weblog text said something about neuro-psychology. Today we have a look at cognitive psychology.

Stellan Ohlsson: Deep learning

Stellan Ohlsson (2011) Deep Learning: How the Mind Overrides Experience may be relevant for mathematics education. One teaching method is to get students to think about a problem until the penny drops. For this, Ohlsson discusses a bit more than the distinction between old and new experience:

“(…) the human mind also possesses the ability to override experience and adapt to changing circumstances. People do more than adapt; they instigate change and create novelty.” (cover text)

“If prior experience is a seriously fallible guide, learning cannot consist solely or even primarily of accumulating experiences, finding regularities therein and projecting those regularities onto the future. To successfully deal with thoroughgoing change, human beings need the ability to override the imperatives of experience and consider actions other than those suggested by the projection of that experience onto the situation at hand. Given the turbulent character of reality, the evolutionary strategy of relying primarily on learned rather than innate behaviors drove the human species to evolve cognitive mechanisms that override prior experience. This is the main theme of this book, so it deserves a label and an explicit statement:

The Deep Learning Hypothesis

In the course of shifting the basis for action from innate structures to acquired knowledge and skills, human beings evolved cognitive processes and mechanisms that enable them to suppress their experience and override its imperatives for action.” (page 21)

Stellan Ohlsson's book (2011) (Source: CUP)


Definition & Reality methodology

The induction question is how one can know whether all swans are white. Even a statistical statement runs into the problem that the error is unknown. The skeptical answer that one cannot know anything is too simple. Economists face the question how one can make a certain general statement about the relation between taxation and unemployment.

My book DRGTPE (2000, 2005, 2011) (PDF online) (though dating from 1990, see the background papers from 1992) proposes the Definition & Reality methodology. (1) The model contains definitions that provide for certainty. Best would be logical tautologies. Lack of contrary evidence allows room for other definitions. (2) When one meets a black “swan” then it is no swan. (3) It is always possible to choose a new model. When there are so many black “swans” that it becomes interesting to do something with them, then one can define “swan2”, and proceed from there. Another example is that in one case you must prove the Pythagorean Theorem and in the other case you adopt it as a definition for the distance metric that gives you Euclidean space. The methodology allows for certainty in knowledge but of course cannot prevent surprises in empirical application or future new definitions. The methodology allows DRGTPE to present a certain analysis about a particular scheme in taxation – the tax void – that causes needless unemployment all over the OECD countries.
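The Pythagorean example in (3) can be sketched in code (my illustration; the function name is mine): when the formula is adopted as the definition of distance in Euclidean space, it is true by construction within that model, and only its relevance for a given application remains open to falsification.

```python
import math

def euclidean_distance(p, q):
    """Distance *defined* by the Pythagorean formula. Within this
    model the formula is true by definition, not an empirical claim."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# The metric properties then follow for any sample points:
p, q, r = (0.0, 0.0), (3.0, 4.0), (6.0, 8.0)
assert euclidean_distance(p, q) == 5.0                        # 3-4-5 triangle
assert euclidean_distance(p, q) == euclidean_distance(q, p)   # symmetry
assert euclidean_distance(p, r) <= euclidean_distance(p, q) + euclidean_distance(q, r)
```

If measured “distances” in some application disagree with this function, the conclusion is not that the definition is false, but that Euclidean space is not the relevant model there.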

Karl Popper (1902-1994) was trained as a psychologist, and in that training met the falsification approach of Otto Selz (1881-1943). Popper turned this into a general philosophy of science. (Perhaps Selz already thought in that direction though.) The Definition & Reality methodology is a small amendment to falsificationism. Namely, definitions are always true. Only their relevance for a particular application is falsifiable. A criterion for a scientific theory is that it can be falsified, but for definitions the strategy is to find general applicability and to reduce the risk of falsification. Pierre van Hiele presented his theory of levels of insight as a general theory of epistemology (see the table below), but it is useful to highlight his original application to mathematics education, with its special property of formal proof. Because of this concept of proof, mathematics may have a higher level of insight / abstraction overall. Both mathematics and philosophy would do better to take mathematics education research as their natural empirical application, to avoid the risk of getting lost in abstraction.

Addendum September 7: The above assumes sensible definitions. Definitions might be logically nonsensical, see ALOE or FMNAI. When a sensible definition doesn’t apply to a particular situation, then we say that it doesn’t apply, rather than that it would be untrue or false. An example is an econometric model that consists of definitions and behavioural equations. A definition that has no relevance for the topic of discussion is not included in that particular model, but may be of use in another model.

| (Un-)certainty | Definitions | Constants | Contingent |
|---|---|---|---|
| Mathematics | Euclidean space | Θ = 2π | ? |
| Physics | Conservation of energy | Speed of light | Local gravity on Earth |
| Economics | Savings are income minus consumption | Institutional (e.g. annual tax code) | Behavioural equations |
| Mathematics education | Van Hiele levels of insight | Institutional | Student variety |
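The economics row can be illustrated with a toy econometric model in the spirit of the September 7 addendum (my sketch; the coefficient values are hypothetical): the definition S = Y − C holds by construction for any data, while the behavioural equation is an empirical claim that can be tested and falsified.

```python
def savings(income, consumption):
    """Definition: S = Y - C. True by construction for any data."""
    return income - consumption

def consumption_behaviour(income, a=10.0, b=0.8):
    """Behavioural equation C = a + b*Y: an empirical claim whose
    coefficients (hypothetical here) can be estimated and falsified."""
    return a + b * income

Y = 100.0
C = consumption_behaviour(Y)   # 90.0 under the assumed a, b
S = savings(Y, C)              # 10.0, so that Y = C + S by definition
assert Y == C + S
```

Data can refute the assumed values of a and b, but no data can refute S = Y − C; at most the definition turns out to be irrelevant for the model at hand.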

To my great satisfaction, Ohlsson (2011:234) adopts basically the same approach.

“The hypothetical process that supposedly transforms particulars into abstractions is called induction and it is often claimed to operate by extracting commonalities across multiple particulars. If the first three swans you ever see are white, the idea swans are white is likely to come to mind. However, the notion of induction is riddled with problems. How are experiences grouped for the purpose of induction? That is, how does the brain know which experiences are instances of some abstraction X, before that abstraction has been learned? How many instances are needed? Which features are to be extracted? How are abstractions with no instances in human experience such as the infinite, the future and perfect justice acquired?”

Definition of abstraction

There is an issue w.r.t. the definition of abstraction though. Compare:

  • My definition of abstraction is leaving out aspects, see here on this weblog, and see FMNAI. My suggestion is that thought itself consists of abstractions. Abstraction depends upon experience since experience feeds brain and mind, but abstraction does not depend upon repeated experience.
  • Ohlsson (2011:16) takes abstraction as identical to induction, which explains the emphasis upon experience in his title, with experience taken as repetition: “Memories of individual events are not very useful in themselves, but, according to the received view, they form the raw material for further learning. By extracting the commonalities across a set of related episodic memories, we can identify the underlying regularity, a process variously referred to as abstraction, generalization or induction.” For Ohlsson, thoughts do not consist of abstractions, but of representations (models): “In the case of human cognition – or the intellect, as it would have been called in the 19th century – the relevant stuff consists of representations. Cognitive functions like seeing, remembering, thinking and deciding are implemented by processes that create, utilize and revise representations.” and “Representations are structures that refer to something (other than themselves).” (page 29)

Ohlsson has abstraction ⇔ induction (commonality). For me it is dubious whether induction really exists. The two pathways are too different to use equivalence. (i) Comparing A and B, one must first abstract from A and then abstract from B, before one may decide whether those abstractions are the same, and before one can even say that A and B share a commonality. (ii) An abstract idea like a circle might cause an “inductive” statement that all future empirical circles will tend to be round, but this isn’t really what is meant by “induction” – which is defined as the “inference” from past swans to future swans.

For me, an abstraction can be a model too, and thus would fit Ohlsson’s term representation, but the fact that he chooses abstraction ⇔ induction rather than abstraction ⇔ representation causes conceptual problems. Ohlsson’s definition of abstraction seems to hinder his understanding of the difference between concrete versus abstract as used in mathematics education research (MER).

Concrete versus abstract

Indeed, Ohlsson suggests an inversion of how people arrive at insight:

“The second contribution of the constraint-based theory is the principle that practical knowledge starts out general and becomes more specific in the course of learning. There is a long-standing tradition, with roots in the beginnings of Western philosophy, of viewing learning as moving in the opposite direction, from particulars to abstractions. [ftnt 38 e.g. to Piaget] Particulars are given in perception while abstractions are human constructions, or so the ancient story goes.” (p234)

“The fundamental principle behind these and many other cognitive theories is that knowledge moves from concrete and specific to abstract and general in the course of learning.” (Ohlsson 2011:434 that states ftnt 38)

If I understand this correctly, and combine it with the earlier argument that general knowledge is based upon induction from specific memories, then we get the following diagram. Ohlsson’s theory seems inconsistent, since the specific memories must derive from specific knowledge but also presuppose it. Perhaps a foetus starts with a specific memory without knowledge, and then a loop of cumulation over time starts, like the chicken-and-egg problem. But this doesn’t seem to be the intention.

Trying to understand Ohlsson's theory of knowledge


There is this statement on page 31 that I find confusing since now abstractions [inductions ?] depend upon representations, while earlier we had them derived from various memories.

“The power of cognition is greatly increased by our ability to form abstractions. Mathematical concepts like the square root of 2 and a four-dimensional sphere are not things we stumble on during a mountain hike. They do not exist except in our representations of them. The same is true of moral concepts like justice and fairness, as well as many less moral ones like fraud and greed. Without representation, we could not think with abstractions of any kind, because there is no other way for abstract entities to be available for reflection except via our representations of them. [ftnt 18]”

Ftnt 18 on page 402: “Although abstractions have interested philosophers for a long time, there is no widely accepted theory of exactly how abstractions are represented. The most developed candidate is schema theory. (…)”

My suggestion to Ohlsson is to adopt my terminology, so that thought, abstraction and representation cover the same notion. Leave induction to the philosophers, and look at statistics for empirical methods. Then eliminate representation as a superfluous word (except for representative democracy).

That said, we still must establish the process from concrete to abstract knowledge. This might be an issue of terminology too. There are some methodological principles involved however.

Wilbrink on Ohlsson

Dutch psychologist Ben Wilbrink alerted me to Ohlsson’s book – and I thank him for that. My own recent book A child wants nice and no mean numbers (CWNN) (PDF online) contains a reference to Wilbrink’s critical discussion of arithmetic in Dutch primary schools. Holland suffers under the regime of “realistic mathematics education” (RME) that originates from the Freudenthal “Head in the Clouds Realistic Mathematics” Institute (FHCRMI) in Utrecht. This FHCRMI is influential around the world, and the world should be warned about its dismal practices and results. Here is my observation that Freudenthal’s approach is a fraud.

Referring to Ohlsson, Wilbrink suggests that the “level theory by Piaget, and then include the levels by Van Hiele and Freudenthal too” (my translation) are outdated and shown wrong. This, however, is too fast. Ohlsson indeed refers to Piaget (stated ftnt 38) but Van Hiele and Freudenthal are missing. It may well be that Ohlsson missed the important insight by Van Hiele. It may explain why Ohlsson is confused about the directions between concrete and abstract.

A key difference between Van Hiele and Freudenthal

CWNN pages 101-106 discusses the main difference between Hans Freudenthal (1905-1990) and his Ph.D. student Pierre van Hiele (1909-2010). Freudenthal’s background was abstract mathematics. Van Hiele was interested from early on in education. He started from Piaget’s stages of development but rejected those. He discovered, though we may as well say defined, levels of insight, starting from the concrete to the higher abstract. Van Hiele presented this theory in his 1957 thesis – the year of Sputnik – as a general theory of knowledge, or epistemology.

Freudenthal accepted the thesis, but mistook the levels for the difference between pure and applied mathematics. When Freudenthal noticed that his prowess in mathematics was declining, he offered himself the choice of proceeding with the history of mathematics or the education of mathematics. He chose the latter. Hence, he coined the phrase realistic mathematics education (RME), and elbowed Van Hiele out of the picture. As an abstractly thinking mathematician, Freudenthal created an entirely new reality, not caring about the empirical mindset and findings of Van Hiele. One should really read CWNN pages 101-106 for a closer discussion of this. Van Hiele’s theory of knowledge is hugely important, and one should be aware how it got snowed under.

A recent twist in the story is that David Tall (2013) rediscovered Van Hiele’s theory, but wrongly holds (see here) that Tall himself found the general value while Van Hiele had the misconception that it only applied to geometry. In itself it is fine that Tall supports the general relevance of the theory of levels.

The core confusion by Ohlsson on concrete versus abstract

The words “concrete” and “abstract” must not be used as absolutely fixed in exact meaning. This seems to be the core confusion of Ohlsson w.r.t. this terminology.

When a child plays with wooden blocks we would call this concrete, but our definition of thought is that thinking consists of abstractions, whence the meanings of the two words become blurred. The higher abstract achievement of one level will be the concrete base for the next level. The level shift towards more insight consists of compacting earlier insights. What once was called “abstract” suddenly is called “concrete”. The statement “from concrete to abstract” indicates both the general idea and a particular level shift.

Van Hiele’s theory is essentially a logical framework. It is difficult to argue with logic:

  1. A novice will not be able to prove laws or the theorems in abstract mathematics, even informally, and may even lack the notion of proof. Having achieved formal proof may be called the highest level.
  2. A novice will not be able to identify properties and describe their relationships. This is clearly less complex than (1), but still more complex than (3). There is no way going from (3) to (1) without passing this level.
  3. A novice best starts with what one knows. This is not applied mathematics, as Freudenthal fraudulently suggested, but concerns the development of abstractions that are available at this level. Thus, use experience, grow aware of experience, use the dimensions of text, graph, number and symbol, and develop the thoughts about these.

Van Hiele mentioned five levels, e.g. with the distinction between informal and formal deduction, but this is oriented at mathematics, and the above three levels seem sufficient to establish the generality of this theory of knowledge. A key insight is that words have different meanings depending upon the level of insight. There are at least three different languages spoken here.

Three minor sources of confusion are

  • Ohlsson’s observation that one often goes from the general to the specific is correct. Children may be vague about the distinction between “a man” and “one man”, but as grown up lawyers they will cherish it. This phenomenon is not an argument against the theory of levels. It is an argument about becoming precise. It is incorrect to hold that “one man” is more concrete and “a man” more abstract.
  • There appears to exist a cultural difference between on one side Germans who tend to require the general concept (All men are mortal) before they can understand the particular (Socrates is mortal), and the English (or Anglo-Saxons who departed from Germany) who tend to understand only the particular and to deny the general. This cultural difference is not necessarily epistemological.
  • Education concerns knowledge, skill and attitude. Ohlsson puts much emphasis on skill. Major phases then are arriving at a rough understanding and effectiveness, practicing, mastering and achieving efficiency. One can easily see this in football, but for mathematics there is the interplay with knowledge and the levels of insight. Since Ohlsson lacks the levels of insight, his phases cover only part of the issue.

I have looked only at parts of Ohlsson’s book, in particular above sections that allow a bit more clarity on the relevance w.r.t. Van Hiele’s theory of levels of insight. Please understand my predicament. Perhaps I read more of Ohlsson’s book later on, but this need not be soon.

  • In mathematics education research (MER) we obviously look at findings of cognitive psychology, but this field is large, and it is not the objective to become a cognitive psychologist oneself.
  • When cognitive psychologists formulate theories that include mathematical abstraction, as Ohlsson does, let them please look at the general theory on knowledge by Pierre van Hiele, for this will make it more relevant for MER.
  • Perhaps cognitive psychologists should blame themselves for overlooking the theory by Pierre van Hiele, but they also should blame Hans Freudenthal, and support my letter to IMU / ICMI asking to correct the issue. They may work at universities that also have departments of mathematics and sections that deal with MER, and they can ask what happened.
  • When there is criticism on the theory by Van Hiele, please look first at the available material. There are summary statements on the internet, but these are not enough. David Tall looked basically at one article and misread a sentence (and his misunderstanding still was inconsistent with the article). For some references on Van Hiele look here. (There is the Van Hiele page by Ben Wilbrink, but, as said, Wilbrink doesn’t understand it yet.)