I applaud this chart in which he tabulates not only *causes and effects* but also *means and goals*. (Clicking on the picture brings you to the 2007 TED talk; at the end the audience may applaud for another reason, namely when he swallows a sword to illustrate that the “impossible is possible”.)

My impression is that we best honour Rosling by continuing the discussion about his work. Thus, my comments are as follows.

First of all, my book *Definition & Reality in the General Theory of Political Economy* shows that the Trias Politica model of democracy fails, because it still allows politicians too much room to manipulate information and to meddle in scientific advice on policy making. Thus, governance is much more important than Rosling suggested. In line with his analysis, Rosling in some of his simulations used only economic growth as the decisive causal factor to explain the development of countries. However, the key causal factor is governance. The statistical reporting on this is not well developed yet. Thus, I move one + from economic growth to governance.

Secondly, my draft book *The Tinbergen & Hueting Approach in the Economics of Ecological Survival* discusses how the environment has become a dominant risk for the world as we know it. It is not a mathematical certainty that there will be ecological collapse, but the very nature of ecological collapse is that it comes *suddenly*, when you don’t expect it. The ecology is so complex that we simply don’t have enough information to manage it properly. It is like standing at the edge of a ravine. With superb control you might risk edging one millimeter closer, but if you are not certain that the ground will hold and that there will not be a sudden gust of wind, then you had better back up. The table given by Rosling doesn’t reflect this key point. Thus, I move one + from economic growth to the environment.

In sum, we get the following adapted table.

For the means, I have contemplated whether I would want to shift another + from economic growth to either human rights (property rights) or education (I am a teacher myself). However, my current objective is to highlight only the main analytical difference.

In the continued discussion we should take care of proper definitions.

The term “economic growth” is confusing. There is a distinction between the *level* and the *annual growth* of income, and there is a distinction w.r.t. the categories within it. Economic welfare consists of both material products (production and services) and immaterial elements (conditions and services). If the term “economic growth” includes both, then this would be okay. In that case, however, the whole table would already be included in the notion of welfare and economic growth. Apparently, Hans Rosling intended the term “economic growth” for the material products. I would suggest replacing his “economic growth” by “income level”, and thus focusing on both *income* and *level* rather than the annual change of a confusingly named statistic. Obviously, it is a policy target that all people have a decent standard of living, but it is useful to remain aware that income is only a *means* to a higher purpose, namely to live a good life.

PM. This raises a discussion about the income distribution, and how the poor and the rich relate to each other, so that the notion of poverty is relative to the general standard of society. In the 1980s the computer was a luxury item; nowadays a cell phone with larger capacity is a necessity. These are relevant aspects, but a discussion would lead too far here.

In the adapted table, the environment gets ++ as both means and goal. There is a slight change of meaning between these separate angles.

- The environment as a *goal* means that we want to preserve nature for our descendants. Our kids and grandchildren should also have tigers and whales in their natural habitat, and not as photographs only.
- The environment as a *means* causes some flip-flop thinking.

(1) In economic thought, everything that exists either already existed or mankind has crafted it from what was given. Thus we only have (i) the environment and (ii) human labour. There are no other means available. From this perspective the environment deserves +++.

(2) For most of its existence (some 60,000 years), mankind took the environment for granted. Clean air and water were available, and if some spot got polluted it was easy to move to the next clean spot. The economic price of the environment was zero. (Or close to it: the cost of moving was not quite a burden, or not seen as an *economic* cost.) Thus, as a means, the environment didn’t figure, and from this viewpoint it deserves a 0. There are still many people who think in this manner. It might be an ingrained cultural habit, but a rather dangerous one.

(3) Perhaps around the middle of the past century, the 1950s, the environment became scarce. As Lionel Robbins explained: the environment has become an economic good. The environment provides functions for human existence and survival, and those functions now get a price. Even more, the Tinbergen & Hueting approach acknowledges that the ecology has become risky for human survival. The USA and Europe might think that they can outsource most environmental pollution to the poorer regions of the world, but when the rain forests turn into deserts and the CO₂ turns the oceans into an acid soup that eats away the bones of fish, then the USA and Europe will suffer the consequences too. In that perspective, the environment deserves +++.

(4) How can we make sure that the environment gets its proper place in the framework of all issues? Eventually, nature is stronger than mankind, and there might arise some natural correction. However, there is also governance. If we get our act together, then mankind might manage the world economy, save the environment at some cost, and still achieve the other goals. Thus governance is +++ and the environment, relative to it, is ++. Thus we arrive at the above adapted table.

As a teacher of mathematics I emphasize the combined presentation of *text, formula, numeric table, and graph*. By looking at these different angles, there is greater scope for integrated understanding. Some students are better at single aspects, but by presenting the four angles you cover the various types of students, and all students get an opportunity to develop the aspects that they are weaker in.

Obviously, dynamic simulation is a fifth aspect. See for example the Wolfram Demonstrations Project. Many have been making applets in Java and embedding these in HTML5, yet the use of *Mathematica* would allow for more exchangeable and editable code, and embedding within educational contexts in which the manipulation of *text, formula, numeric table, and graph* would also be standard.

Obviously, role playing and simulation games are a sixth aspect. This adds human interaction and social psychology to the learning experience. Dennis Meadows has been using this to make people aware of the risk to the environment, see e.g. “Stratagem” or MIT-Sloan.

What I particularly like about Rosling’s table is his emphasis on culture as a goal. Artists and other people in the world of culture will already be convinced of this – see also Roefie Hueting on the jazz stage – yet others may not be aware that mankind exists by culture.

There is also an important economic angle on culture as a means. In recessions and depressions, the government can stimulate cultural activity, such that money starts flowing again *with much less risk to competitive conditions*. That is, if the government were to support the automobile industry or steel and make specific investments, then this might favour some industries or services at the cost of others, affect competitive conditions overall, and even insert imbalances into the economy in some structural manner. Yet stimulating cultural activity might be much more neutral and still generate an economic stimulus.

For example, Germany got into economic problems around 1920 and the government responded by printing more money, which caused the hyperinflation. This experience became ingrained in the German attitude towards monetary issues. In the Eurozone, Germany follows the hard line that inflation should be prevented at all costs. Thus the Eurozone now has fiat money that still functions like a gold standard because of the strict rules. (See my paper on this.) By comparison, when the USA got into economic problems around 1930, the central bank was hesitant to print money (no doubt looking at the German example), and this eventually caused the Great Depression. Thus monetary policy has a Scylla and Charybdis character, with the risks of either too little or too much. Potentially, the option to organise cultural activity would be a welcome addition to the instruments to avoid such risks and smooth the path towards recovery.

I am not quite suggesting that the ECB should print money to pay the unemployed in Greece, Italy, Spain and Portugal to make music and dance in the streets. Yet, if the EU were to invest in museums and restorations and other cultural services, so that Northern Europe can better enjoy its vacations in Southern Europe, then this would likely be more acceptable than investing such funds *directly* in factories that start to compete with the North. The current situation, in which Southern Europe has both unemployment and fewer funds to maintain its cultural heritage, is obviously suboptimal.

The point is also made in my book *Common Sense: Boycott Holland*. Just to be sure: this notion w.r.t. culture is not the main point of CSBH. It is just a notion that is worthy of mentioning.

PM. Imagine a dynamic simulation of restoring the Colosseum. Or is it culturally more valuable as a ruin than fully restored?


This false portrayal of a political opponent is a new low in the Low Countries.

The photoshopped picture reportedly dates from 2009, but there are general elections for the Dutch House of Commons on March 15, which may be why Wilders uses it now. Wilders might have limited campaign funds, and the abuse of this picture is politically cunning, since hordes of people, including me, are discussing it now. Attention is half of the job, and Wilders knows how to get attention. And when there is a terrorist attack, he can claim that he has been warning all along.

Yet the downside of this is that there are feeble minds on the radical right, like Anders Breivik, who worship Wilders, and who might take this portrayal as an invitation to target Pechtold. The UK saw the assassination of Jo Cox in 2016. Holland already saw a smear campaign against Pim Fortuyn in 2002, who was then assassinated by an activist on the left. And a gunman who killed six people in 2011 was a sympathiser of Wilders. Journalist Peter Breedveld has been reporting consistently that the political climate in Holland is getting heated, repressive and threatening of violence. Pechtold is alarmed. He warned that Wilders is deliberately rousing up his followers. One sympathiser of Wilders already threatened to kill Pechtold, and Pechtold informed reporters that he had to testify in court to get the man convicted. A close political friend of Pechtold, Els Borst, was murdered by a lunatic in 2014, apparently without political motivation, but it still has an impact.

Geert Wilders and Alexander Pechtold have a history of feeding on each other. They are each other’s best enemies. While Wilders finds great profit in demonising Pechtold as the fellow-traveller of political islam, Pechtold finds great profit in portraying Wilders as indecent and “over the top”. Their political clash was the motor for their rise to public attention in 2006-2010. In the elections of 2010, Pechtold jumped from 3 to 10 seats, and Wilders from 9 to 24 seats.

The following graph shows the number of seats of Wilders (PVV, red) and Pechtold (D66, blue) in the Dutch House of Commons, with a total of 150 seats. (Source: Wikipedia, here adapted.)

- Wilders started in 2004 as a one-man separation from the Dutch conservative party VVD. The official line of the VVD was that Turkey might eventually join the European Union, but Wilders disagreed, and wished to have the freedom to say so. The letters VVD stand for the *People’s Party for Freedom and Democracy*, but party leader Gerrit Zalm denied Wilders his freedom of expression. In 2006 Wilders got 9 seats, in 2010 he jumped to 24, and in 2012 he got 15. (Incidentally: Gerrit Zalm had also participated in the smear campaign against Pim Fortuyn, labeling him a “dangerous man”. Zalm was also the director of the CPB who in 1990 censored my work at the CPB and who dismissed me there with falsehoods, the very issue that this weblog is about.)
- In 2006, D66 had been reduced from 24 seats to 3, and Pechtold began as the new leader. There was talk about ending the party, yet Pechtold managed to get the party back to 10 seats. His strategy was to oppose Wilders.
- As said, in the elections of 2010, Pechtold jumped from 3 to 10 seats, and Wilders from 9 to 24 seats.
- In 2010-2012 there was the 1st Rutte Cabinet, a minority government with support by Wilders. This cabinet failed and collapsed, and at the subsequent elections in 2012 Wilders got 15 seats.

The major problem with D66 is that its party elite and its voters cannot think straight. The name D66 is an abbreviation of “Democrats 1966”, and the idea of founder Hans van Mierlo (1931-2010) was to improve democracy. Van Mierlo was from the Catholic south of Holland, and he was inspired by JFK in the USA. (See my weblog text on the Dutch Taliban.) Thus he suggested that Holland copy democratic conventions from the USA, like district voting, a directly elected president and mayors, and referenda. Unfortunately, Van Mierlo had a degree in law and worked as a journalist, and he never really studied democracy. The membership of D66 consists mostly of lawyers too. They are mostly concerned about the “rule of law”, and less about what the law is about. By now, it should be obvious that Van Mierlo’s ideas about democracy have always been perverse, and actually reduce democracy. Yet D66 doesn’t openly say so, and they still claim that they and their proposals would improve democracy. *Thus D66 is a fossilised lie about democracy.*

- Direct elections with districts meant that in the Bush, Gore and Nader elections, Bush got elected (and we got the lie on Iraq), and that in the Clinton and Trump election, Trump got elected, while in terms of percentages Gore would have beaten Bush, and Clinton would have beaten Trump.
- For referenda, see this discussion about Brexit.

See my book *Voting theory for democracy* and this article about multiple seats elections.

Thus, when D66 collapsed to 3 seats, I hoped that D66 would be abolished, and that there would be room for a new political initiative, combining sound ideas about democracy with sound ideas about economics and sound ideas about social compassion. Yet there was Pechtold. He has a degree in art history and a working background as an auctioneer, and developed further as a career politician. D66 apparently allows it, and eventually is grateful to him for “saving the party”, as if that were so useful.

D66 has been applying the great logical capacities that it already showed on democracy to the issue of Wilders and immigration as well. Supposedly Pechtold attacked Wilders, *but he actually made him bigger.* D66 and Pechtold cannot see this fact and this logic, since Pechtold “saved D66” by that jump from 3 to 10 seats. Clearly the attack by Pechtold on Wilders was a great success, witness the growth of D66! Thus they keep themselves deliberately blind to that jump of Wilders from 9 to 24 seats.

The best answer to Wilders would be a party that combines sound ideas about democracy with sound ideas about economics and sound ideas about social compassion. Yet, Pechtold and D66 block this, because of their perverse ideas about democracy and their perverse claim that they have success in attacking Wilders.

Well, it is Holland. Boycott this country till it develops a respect for science so that it lifts the censorship of science since 1990 by the directorate of the Dutch Central Planning Bureau (CPB).


IES 2010 key advice number 3 is:

“Help students understand why procedures for computations with fractions make sense.”

The first example of this *helping to understand* is:

“A common mistake students make when faced with fractions that have unlike denominators is to add both numerators and denominators. [ref 88] Certain representations can provide visual cues to help students see the need for common denominators.” (Siegler et al. (2010:32), referring to Cramer, K., & Wyberg, T. (2009))

For *a* / *b* “and” *c* / *d*, kids are supposed to find (*ad* + *bc*) / (*bd*) instead of (*a* + *c*) / (*b* + *d*).

Obviously this is a matter of definition. For “plus” we define: *a* / *b* + *c* / *d* = (*ad* + *bc*) / (*bd*).

But we can also define “superplus”: *a* / *b* ⊕ *c* / *d* = (*a* + *c*) / (*b* + *d*).

The crux lies in “and” that might not always be “plus”.

There are cases where (*a* + *c*) / (*b* + *d*) makes eminent sense. For example, when *a* / *b* is the batting average in the Fall-Winter season and *c* / *d* the batting average in the Spring-Summer season, then the annual (weighted) batting average is exactly (*a* + *c*) / (*b* + *d*). Kids would calculate correctly, and yet Siegler et al. (2010) suggest that the kids would be making a wrong calculation?

The “superplus” outcome is called the “mediant”. See a Wolfram Demonstrations Project case with batting scores.

Adding up fractions of the same pizza thus differs from averaging over more pizzas.
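The distinction can be sketched in a few lines of Python. The batting numbers below are made up for illustration; the point is only that “plus” and “superplus” answer different questions:

```python
from fractions import Fraction

def plus(a, b, c, d):
    """Ordinary fraction addition: a/b + c/d = (a*d + b*c) / (b*d)."""
    return Fraction(a * d + b * c, b * d)

def superplus(a, b, c, d):
    """The mediant: (a + c) / (b + d), as an unreduced pair of counts."""
    return (a + c, b + d)

# Hypothetical batting record: (hits, at-bats) per half season.
fall_winter   = (30, 100)   # average 0.30
spring_summer = (60, 150)   # average 0.40

num, den = superplus(*fall_winter, *spring_summer)
print(num, "/", den, "=", num / den)       # 90 / 250 = 0.36, the true annual average
print(plus(*fall_winter, *spring_summer))  # 7/10: adds the two averages, not the games
```

The mediant is exactly the count-weighted average of the two seasons, while “plus” applies when the two fractions refer to parts of the same whole, such as slices of one pizza.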

We thus observe:

- Kids live in a world in which (*a* + *c*) / (*b* + *d*) makes eminent sense.
- Telling them that this is “a mistaken calculation” is actually quite confusing for them.
- Thus it is better teaching practice to explain to them when it makes sense.

There is no alternative but to explain Simpson’s paradox also in elementary school. See the discussion about the paradox in the former weblog entry. The issue for today is how to translate this to elementary school.

Many examples of Simpson’s paradox have larger numbers, but the Kleinbaum et al. (2003:277) “ActivEpi” example has small numbers (see also here). I add one more pet to make the case less symmetrical. Kady Schneiter rightly remarked that an example with cats and dogs will be more appealing to students. She uses size (small or large pets) as a factor, but let me stick to the idea of gender as a confounder. Thus the kids in class can be presented with the following case.

- There are 17 cats and 16 dogs.
- There are 17 pets kept in the house and 16 kept outside.
- There are 17 male pets and 16 female pets (perhaps “helped”).

There is the following phenomenon – though kids might be oblivious as to why this might be “paradoxical”:

- For the male pets, the proportion of cats in the house is *larger* than the proportion for dogs.
- For the female pets, the proportion of cats in the house is *larger* than the proportion for dogs.
- For all pets combined, the proportion of cats in the house is *smaller* than the proportion for dogs.

The paradoxical data are given as follows. Observe that kids must calculate:

- For the cats: 6 / 7 = 0.86, 2 / 10 = 0.20 and (6 + 2) / (7 + 10) = 0.47.
- For the dogs: 8 / 10 = 0.80, 1 / 6 = 0.17 and (8 + 1) / (10 + 6) = 0.56.
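These calculations can be verified with a short Python sketch, using the counts from the list above:

```python
# In-house counts per gender, as (in_house, total) pairs, from the case above.
cats = {"M": (6, 7), "F": (2, 10)}
dogs = {"M": (8, 10), "F": (1, 6)}

def proportions(pairs):
    """Per-gender proportions in the house, plus the pooled proportion (the mediant)."""
    result = {g: h / t for g, (h, t) in pairs.items()}
    house = sum(h for h, t in pairs.values())
    total = sum(t for h, t in pairs.values())
    result["all"] = house / total
    return result

pc = proportions(cats)   # {'M': 0.857..., 'F': 0.2, 'all': 8/17 = 0.47...}
pd = proportions(dogs)   # {'M': 0.8, 'F': 0.166..., 'all': 9/16 = 0.5625}

# Simpson's paradox: cats lead within each gender, yet dogs lead overall.
assert pc["M"] > pd["M"] and pc["F"] > pd["F"] and pc["all"] < pd["all"]
```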

Perhaps the major didactic challenge is to explain to kids why the outcome must be seen as “paradoxical”. When kids have not yet developed “quantitative intuitions”, those intuitions might not be challenged, and it might be wise to keep it that way. When data are seen as statistics only, then there might be less scope for false interpretations.

Obviously, though, one would discuss the various views that kids generate, so that they are actively engaged in trying to understand the situation.

The next step is to call attention to the sum totals that haven’t been shown above.

It is straightforward to observe that the *M* and *F* are distributed in an unbalanced manner.

It can be argued that there should be equal numbers of *M* and *F*. This leads to the following calculations about which pets would be kept in the house. We keep the observed proportions intact and raise the numbers proportionally.

- For the cats: 0.86 * 10 ≈ 9, and (9 + 2) / (10 + 10) = 0.55.
- For the dogs: 0.17 * 10 ≈ 2, and (8 + 2) / (10 + 10) = 0.50.

And now we find: Also for all pets combined, the proportion of cats in the house is *larger* than the proportion for dogs. Adding up the subtables into the grand total doesn’t generate a different conclusion on the proportions.
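The same adjustment in code, assuming, as above, that the observed proportions carry over unchanged to the scaled-up groups (with rounding to whole pets):

```python
# In-house counts per gender, as (in_house, total) pairs, as observed.
cats = {"M": (6, 7), "F": (2, 10)}
dogs = {"M": (8, 10), "F": (1, 6)}

def balanced(pairs, n=10):
    """Scale every gender group to n pets, keeping the observed proportions (rounded)."""
    return {g: (round(h / t * n), n) for g, (h, t) in pairs.items()}

def pooled(pairs):
    """Pooled in-house proportion over all genders."""
    house = sum(h for h, t in pairs.values())
    total = sum(t for h, t in pairs.values())
    return house / total

print(pooled(balanced(cats)))   # (9 + 2) / 20 = 0.55
print(pooled(balanced(dogs)))   # (8 + 2) / 20 = 0.50
```

With the genders balanced, the pooled proportions agree with the per-gender comparison, and the paradox disappears.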

Perhaps kids at elementary school should not be bothered with discussions on causality, certainly not in as flimsy a case as this. But perhaps some kids require closure on this, or perhaps the teacher does. In that case the story might be that the kind of pet is the cause, and that the location where the pet is kept is the effect. When people have a cat, they tend to keep it at home. When people have a dog, they are a bit more inclined to keep it outside. The location has no effect on gender. The gender of the pet doesn’t change by keeping it inside or outside of the house.

Pierre van Hiele (1909-2010) explained for most of his professional life that kids at elementary school can understand vectors. Thus, they should be able to enjoy this vector graphic by Alexander Bogomolny.

Van Hiele also proposed to abolish fractions as we know them, by replacing *y* / *x* by *y* *x*^(-1). The latter might be confusing because kids might think that they have to subtract something. But the mathematical constant *H* = -1 makes perfect sense, namely, check the unit circle and the complex number *i*. Thus we get *y* / *x* = *y* *x*^*H*. The latter would be the better format.

Some conclusions are:

- What Siegler & IES 2010 call a “common mistake” is the proper approach in serious statistics.
- Teaching can improve by explaining to kids what method applies when. Adding fractions of the same pizza is different from calculating a statistical average. (PM. Don’t use round pizzas. These make for less insightful parts.)
- Kids live in a world in which statistics are relevant too.
- Simpson’s paradox can be adapted such that it may be tested whether it can be discussed in elementary school too.
- The discussion corroborates Van Hiele’s arguments for vectors in elementary school, for the abolition of fractions as we know them (*y* / *x*), and for the use of *y* *x*^*H* with *H* = -1. The key thing to learn is that there are numbers *x*^*H* such that *x* *x*^*H* = 1 when *x* ≠ 0, and the rest follows from there.

PM. The Excel sheet for this case is: 2017-01-30-data-from-kleinbaum-2003


Judea Pearl, in his wonderful book “Causality” (1st edition 2000, my copy 2007), of which there now is a 2nd edition, took issue with statistics and looked for a way to get from correlation to causality. His suggestion is the “do”-operator. I am still pondering this. For now I tend to regard it as manipulation in models with endogeneity and exogeneity of variables. Please allow me my pondering: some issues require time. See here for an earlier suggestion on causality, one on the counterfactual, and one on confounding. Some earlier papers on the 2 x 2 x 2 case are here. Today I want to look a bit at Simpson’s paradox with an eye on education.

In graphs, the horizontal *x* axis gives the cause and the vertical *y* axis gives the effect. For the derivative we look at d*y* / d*x*. Thus in numerical tables we had better put the *y* in the top row and the *x* in the bottom row.

For 2 x 2 tables the lowest row is the sum of the rows above. Since this lowest row had better be the cause, we thus put the cause in vertical columns and the effect in horizontal rows. This seems a bit of a paradox, but see the presentation below.

(This is similar to when we have the true state (disease) (gold standard) vertically and the test statistic (test) in the rows, when we determine the sensitivity and specificity of a test. Check the Wikipedia “worked example”, since the main theory there is transposed.)

Pearl (2013), “*Understanding Simpson’s Paradox*” (technical report R-414), has a transposed table. It is better to transpose it back. He also mentions the combined group first, but it seems better to put this at the end. (PM. A recent discussion by Pearl on Simpson’s paradox is here.)

The following are the data from Pearl (2013), the appendix, figure 4, page 10. The data are the count of the individuals involved. Both men and women are treated (cause) or not, and they recover (effect) or not. Since this is a controlled trial, we do not need to look at prevalence and such.

When we divide the effect (row 1) by the total (row 3) then we get the recovery rates (row 4). We do this for the men, women and joint (combined, pooled) data. We find the paradoxical situation:

- For the men, the treatment causes reduced recovery (0.6 < 0.7).
- For the women, the treatment causes reduced recovery (0.2 < 0.3).
- For all combined, the treatment causes improved recovery (0.5 > 0.4).

We may arrange issues in “cause” and “effect”, but the real relations are determined by reality. Data like these might be available for various models. Pearl (2013) figure 1 mentions more models, but let us consider cases (a) and (b). In the above we have been assuming model (a) on the left, with a path from cause *X* to effect *Y*, in which variable *Z* (gender) is causally independent. The above data table, however, would also fit the format of model (b), in which variable *Z* (blood pressure) would not be independent, and might be confounding issues.

Perhaps gender is actually confounding the situation in the above table too? The result of the table is so strange that we perhaps must revise our ideas about the causal relations that we have been assuming.

Pearl’s condition for causality is that “the drug has no effect on gender”, see p. 10 and his formula (7) (with *F* there rather than *Z* here). The above data show that there is an effect, or, when we e.g. look at the women, that Pr[Female | Cause] and Pr[Female | No cause] are different, and thus differ from the marginal probability Pr[Female].

In the table above, we compare line (7) of all women with line (11) of all patients. The women are only 25% of all treated patients and 75% of all untreated ones. Perhaps the treatment has no effect on gender, but the data would suggest otherwise.

It would be sufficient (not necessary) to adjust the subgroup sizes, such that there is “equal representation”. NB. Pearl here refers to the “sure thing principle”, apparently formulated by Savage (1954), whose condition is that the action doesn’t modify the distribution. For us, the condition and proof of equal representation has another relevance now.

Since this is a controlled trial, we can adapt by including more patients, such that the numbers in the different subgroups (rows (3) and (7), below in red) are equal. This involves 40 more patients, namely 20 men in the non-treatment group and 20 women in the treatment group. This generates the following table.

For ease, it is assumed that the conditional probabilities of the subgroups – thus rows (4) and (8) – remain the same, and that the new patients are distributed accordingly. Of course, they might deviate from this, but then we have better data anyway.

The consequence of including adequate numbers of patients in the subgroups is:

- Row (13) now shows that Pr[*Z* | *C*] = Pr[*Z* | *Not-C*] = Pr[*Z*], for *Z* = M or F.
- As the treatment is harmful in both subgroups, it also is harmful for the pooled group.

Obviously, when the original data already allow an estimate of the harmful effect, it would not be ethical to subject 20 more women to the treatment – while it might be easy to find 20 more men who don’t have the treatment. Thus, it suffices to use the above as a statistical correction only. If we assume the same conditional probabilities w.r.t. the cause-effect relation in the subgroups, then the second table gives the counterfactual as if the subgroups had the same number of patients. There would be no occurrence of the Simpson paradox.
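As a sketch, the counts below are reconstructed from the rates quoted above (0.6 vs 0.7 for men, 0.2 vs 0.3 for women, 0.5 vs 0.4 pooled) and from women being 25% of the treated and 75% of the untreated patients; they should correspond to Pearl (2013), figure 4, whose table is not reproduced here:

```python
# Recovery counts per gender, as (recovered, total) pairs, reconstructed from
# the rates and subgroup shares quoted in the text.
treated   = {"M": (18, 30), "F": (2, 10)}
untreated = {"M": (7, 10),  "F": (9, 30)}

def rate(pairs, group=None):
    """Recovery rate for one gender, or pooled over all genders when group is None."""
    if group is None:
        rec = sum(r for r, t in pairs.values())
        tot = sum(t for r, t in pairs.values())
        return rec / tot
    r, t = pairs[group]
    return r / t

# Simpson's paradox: harmful per gender, seemingly beneficial pooled.
assert rate(treated, "M") < rate(untreated, "M")   # 0.6 < 0.7
assert rate(treated, "F") < rate(untreated, "F")   # 0.2 < 0.3
assert rate(treated) > rate(untreated)             # 0.5 > 0.4

# Statistical correction: 20 extra treated women and 20 extra untreated men,
# assumed to recover at the same conditional rates as rows (4) and (8).
treated["F"]   = (2 + round(0.2 * 20), 30)   # (6, 30)
untreated["M"] = (7 + round(0.7 * 20), 30)   # (21, 30)

print(rate(treated), rate(untreated))  # 0.4 0.5: now harmful for the pooled group too
```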

This counterfactual would also hold in cases when we cannot simply adjust the group sizes, like the classic case of admissions of students to Berkeley.

While the causality that the drug has no effect on gender is quite clear, the situation is less obvious w.r.t. the issue of blood pressure. In this case it might not be possible to get equal numbers in the subgroups – not for ethical reasons, but because people react differently to the treatment. This case would require a separate discussion, for the causality clearly is different.

There are some sites for a first encounter with Simpson’s paradox.

A common plot is labelled Baker & Kramer 2001, but earlier were Jeon, Chung & Bae 1987. This plot keeps the numbers of men and women and the conditional probabilities the same, and allows only variation over the enrollments in the subgroups. This nicely shows the composition effect. The condition of equal percentages per subgroup works, but there are also other combinations that avoid Simpson’s paradox. But of course, Pearl is interested in causality, and not the mere statistical effect of composition.

The most insightful plot seems to be from *vudlab*. It has upward sloping lines rather than downward sloping ones, which seems somewhat easier to follow. There is a (seemingly) continuous slider, it rounds the person counts, and it has a graphic for the percentages that makes it easier to focus on those.

Kady Schneiter has various applets on statistics, among which this one on Simpson’s paradox. I agree with her discussion (*Journal of Statistics Education* 2013) that an example with pets (cats and dogs) lowers the barrier for understanding. Perhaps we should not use the size of the pet (small or large) but still use gender. The plot uses downward sloping lines and has an unfortunate lag in the display of the light blue dot. (This might be the dogs, but we can also compare with the Berkeley case in vudlab.)

The Wolfram Demonstrations by (1) Heiner & Wagon and (2) Brodie provide different formats that may come into use too. The advantage of the latter is that you can put in your own numbers.

This discussion by Andrew Gelman caused me to google on these displays.

Alexander Bogomolny has a fine vector display but there is no link to causality (yet).

Robert Banis has some data from the original Berkeley study, and excel sheets using them.

Some ten years ago there would have been more references to Excel sheets indeed, with the need for students to do some editing themselves. The educational attention apparently shifts to applets with sliders. For those who still have an interest in Excel, the sheet with the above tables is here: 2017-01-28-data-from-pearl-2000.

And of course there is Wikipedia (a portal, not a source). (Students from MIT are copying their textbooks into Wikipedia, whence the portal becomes unreadable for the common reader. It definitely cannot be used as an educational source.)

This sets the stage for another kind of discussion in the next weblog entry.


Exponential functions are easily introduced as growth processes. The comparison of *x*² and 2^*x* is an eye-opener, with the stories of duckweed or the grain on the chess board. The introduction of the exponential number *e* is a next step. What intuitions can we use for smooth didactics on *e*?

There is the following “intuitive graph” for the exponential number *e* = 2.71828…. The line *y* = *e* is found by requiring that the inclines (tangents) to *y* = *b*^*x* all run through the origin at {0, 0}. The (dashed) value at each point of tangency is *e* itself.

Remarkably, Michael Range (2016:xxix) also looks at such an outcome *e *= 2^(1 / *c*), where *c *is the derivative of *y *= 2^*x* at *x *= 0, or *c *= ln[2]. NB. Instead of the opaque term “logarithm” let us use “recovered exponent”, denoted as rex[*y*].

Perhaps above plot captures a good intuition of the exponential number ? I am not convinced yet but find that it deserves a fair chance.

NB. Dutch mathematics didactician Hessel Pot, in an email to me of April 7 2013, suggested above plot. There appears to be a Wolfram Demonstrations Project item on this too. Their reference is to Helen Skala, “A discover-e,” *The College Mathematics Journal*, **28**(2), 1997 pp. 128–129 (Jstor), and it has been included in the “Calculus Collection” (2010).

The point-slope version of the incline (tangent) of function *f*[*x*] at *x *= *a *is:

*y – f*[*a*] = *s* (*x *– *a*)

The function *b*^*x *has derivative rex[*b*] *b*^*x. *Thus at arbitrary *a*:

*y – b*^*a* = rex[*b*] *b*^*a* (*x *– *a*)

This line runs through the origin {*x*, *y*} = {0, 0} iff

0* – b*^*a* = rex[*b*] *b*^*a* (0 – *a*)

1 = rex[*b*] *a*

Thus with *H* = -1, *a* = rex[*b*]^*H* = 1 / rex[*b*].

*y* = *f*[*a*] = *b*^*a* = *b*^rex[*b*]^*H* = *e*.

The inclines running through {0, 0} also run through {rex[*b*]^*H*, *e*}.

For example, in above plot, with 2^*x* as the red curve, rex[2] ≈ 0.70 and *a *≈ 1.44, and there we find the intersection with the line *y *= *e.*
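These numbers are easy to verify with a short computation (an illustration, not part of the original exposition):

```python
import math

b = 2
rex_b = math.log(b)            # rex[2] ≈ 0.693, rounded to 0.70 in the text
a = 1 / rex_b                  # point of tangency, ≈ 1.4427
s = rex_b * b ** a             # slope of b**x at a
y_at_origin = b ** a - s * a   # incline y = b**a + s * (x - a), at x = 0
print(b ** a, y_at_origin)     # the height is e, the incline hits {0, 0}
```

The height of the point of tangency is *e*, and the incline indeed passes through the origin (up to floating point rounding).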

Subsequently also at *a* = 1, the point of tangency is {1, *e*}, and we find with *b* = *e* that rex[*e*] = 1, so that this tangent runs through the origin too.

The drawback of this exposition is that it presupposes some algebra on *e* and the recovered exponents. Without this deduction, it is not guaranteed that the above plot is correct. It might be a delusion. Yet since the plot is correct, we may present it to students, and it generates a sense of wonder about what this special number *e* is. Thus it is still possible to show the plot first and only then develop the required math.

Another drawback of this plot is that it compares different exponential functions and doesn’t focus on the key property of *e*^*x, *namely that it is its own derivative. A comparison of different exponential functions is useful, yet for what purpose exactly ?

Our recent weblog text discussed how Cartesius used Euclid’s criterion of tangency of circle and line to determine inclines to curves. The following plots use this idea for *e*^*x* at point *x *= *a, *for *a *= 0 and *a *= 1.

Let us now *define *the number *e *such that the derivative of *e*^*x* is given by *e*^*x* itself. At point *x *= *a *we have *s *= *e*^*a. *Using the point-slope equation for the incline:

*y – f*[*a*] = *s* (*x *– *a*)

*y – **e*^*a* = *e^a* (*x *– *a*)

*y * = *e^a* (*x *– (*a* – 1))

Thus the inclines cut the horizontal axis at {*x, y*} = {*a *– 1, 0}, and the slope indeed is given by the tangent *s* = (*f*[a] – 0) / (*a *– (*a *– 1)) = *f*[*a*] / 1 = *e*^*a. *
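The intercept property can be confirmed directly (a sketch; the helper name `incline_x_intercept` is mine):

```python
import math

def incline_x_intercept(a):
    """The incline y - e**a = e**a * (x - a) meets y = 0 at x = a - 1."""
    s = math.exp(a)
    return a - math.exp(a) / s     # = a - 1 for every a

for a in (0.0, 1.0, 2.5):
    print(a, incline_x_intercept(a))
```

Whatever the point of tangency, the incline always cuts the horizontal axis exactly one unit to the left of it.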

The center {*u*, 0} and radius *r* of the circle can be found from the formulas of the mentioned weblog entry (or Pythagoras), and check e.g. *a *= 0:

*u *= *a *+ *s **f*[*a*] = *a *+ (*e*^*a*)²

*r* = *f*[*a*] √(1 + *s*²) = *e*^*a* √(1 + (*e*^*a*)²)
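A quick numeric check of these formulas (an illustration under the stated definitions; `tangent_circle` is my own helper): the distance from the center to the point of tangency equals *r*, and the radius is perpendicular to the incline.

```python
import math

def tangent_circle(a):
    """Center {u, 0} and radius r of the circle tangent to e**x at x = a."""
    fa = math.exp(a)
    s = fa                          # the slope equals the function value
    u = a + s * fa
    r = fa * math.sqrt(1 + s * s)
    return u, r

a = 0.0
u, r = tangent_circle(a)
fa = math.exp(a)
dist = math.hypot(a - u, fa)        # center to point of tangency
perp = (a - u) * 1 + fa * fa        # radius dotted with direction (1, s)
print(u, r, dist, perp)             # dist equals r; perp is 0
```

At *a* = 0 this gives *u* = 2 - 1 = ... rather, *u* = 0 + 1·1 = 1 and *r* = √2, as Euclid’s tangency criterion requires.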

A key problem with this approach is that the notion of “derivative” is not defined yet. We might plug in any number, say *e*^2 = 10 and *e^*3 = 11. For any location the Pythagorean Theorem allows us to create a circle. The notion of a circle is not essential here (yet). But it is nice to see how Cartesius might have done it, if he had had *e *= 2.71828….

*Conquest of the Plane* (2011:167+), pdf online, has the following approach:

- §12.1.1 has the **intuition** of the “fixed point” that the derivative of *e*^*x* is given by *e*^*x* itself. For didactics it is important to have this property firmly established in the minds of the students, since they tend to forget this. This might be achieved perhaps in other ways too, but COTP has opted for the notion of a fixed point. The discussion is “hand waving” and not intended as a real development of fixed points or theory of function spaces.
- §12.1.2 **defines** *e* with some key properties. It holds by definition that the derivative of *e*^*x* is given by *e*^*x* itself, but there are also some direct implications, like the slope of 1 at *x* = 0. Observe that COTP handles integral and derivative consistently as interdependent notions. (Shen & Lin (2014) use this approach too.)
- §12.1.3 gives the **existence proof**. With the mentioned properties, such a number and function appears to exist. This compares *e*^*x* with other exponential functions *b*^*x* and the recovered exponents rex[*y*] – i.e. logarithm ln[*y*].
- §12.1.4 uses the chain rule to find the derivatives of *b*^*x* in general. The plot suggested by Hessel Pot above would be a welcome addition to confirm this deduction and extension of the existence proof.
- §12.1.5-7 have some relevant aspects that need not concern us here.
- §12.1.8.1 shows that the definition is **consistent** with the earlier formal definition of a derivative. Application of that definition doesn’t generate an inconsistency. *No limits are required.*
- §12.1.8.2 gives the **numerical development** of *e* = 2.71828… There is a clear distinction between the deduction that such a number exists and the calculation of its value. (The approach with limits might confuse these aspects.)
- §12.1.8.3 shows that also the notion of the dynamic quotient (COTP p57) is **consistent** with the above approach to *e*. Thus, the above hasn’t used the dynamic quotient. Using it, we can derive that 1 = {(*e*^*h* – 1) // *h*, set *h* = 0}. Thus the latter expression cannot be simplified further, but we don’t need to do so since we can determine that its value is 1. If we would wish so, we could use this (deduced) property to define *e* as well (“the formal approach”).

The key difference between COTP and above “approach of Cartesius” is that COTP shows how the (common) numerical development of *e *can be found. This method relies on the formula of the derivative, which Cartesius didn’t have (or didn’t want to adopt from Fermat).

In my email of March 27 2013 to Hessel Pot I explained how COTP differed from a particular Dutch textbook on the introduction of *e*.

- The textbook suggests that *f*‘[0] = 1 would be an intuitive criterion. This is only partly true.
- It proceeds in reworking *f*‘[0] = 1 into a more general formula. (I didn’t mention unstated assumptions in 2013.)
- It eventually boils down to indeed positing that *e*^*x* has itself as its derivative, but this definition thus is not explicitly presented as a definition. The clarity of positing this is obscured by the path leading there. Thus, I feel that the approach in COTP is a small but actually key innovation: to explicitly define *e*^*x* as being equal to its own derivative.
- It presents *e* with only three decimals.

There are more ways to address the intuition for the exponential number, like the growth process or the surface area under 1 / *x*. Yet the above approaches are more fitting for the algebraic approach. Of these, COTP has a development that is strong and appealing. The plots by Cartesius and Pot are useful and supportive but no alternatives.

The **Appendix** contains a deduction that was done in the course of writing this weblog entry. It seems useful to include it, but it is not key to above argument.

The earlier weblog entry on Cartesius and Fermat used a circle and generated a “general formula” on a factor *x *– *a*. This is not really factoring, since the factor only holds when the curve lies on a circle.

Using the two relations:

*f*[*x*] – *f*[*a*] = (*x *– *a*) (2*u – x – a*) / (*f*[*x*] + *f*[*a*]) … (* general)

*u *= *a *+ *s **f*[*a*] … (for a tangent to a circle)

we can restate the earlier theorem that *s* defined in this manner generates the slope that is tangent to a circle.

*f*[*x*] – *f*[*a*] = (*x *– *a*) (2 *s f*[*a*]* – *(*x – a*)) / (*f*[*x*] + *f*[*a*])

It will be useful to switch to *x *–* a *= *h*:

*f*[*a* *+ h*] – *f*[*a*] = *h* (2 *s f*[*a*]* – h*) / (*f*[*a *+ *h*] + *f*[*a*])

Thus with the definition of the derivative via the dynamic quotient we have:

d*f */ dx = {Δ*f *// Δ*x, *set Δ*x *= 0}

= {(*f*[*a* *+ h*] – *f*[*a*]) // *h, *set *h *= 0}

= { (2 *s f*[*a*]* – h*) / (*f*[*a *+ *h*] + *f*[*a*])*, *set *h *= 0}

= *s*

This merely shows that the dynamic quotient restates the earlier theorem on the tangency of a line and circle for a curve.
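The restatement can be sanity-checked on a concrete circle, say center {0, 0} and radius 5, with *a* = 3 and *h* = 1 (a hypothetical instance, not from the text):

```python
import math

u, r = 0.0, 5.0
f = lambda x: math.sqrt(r * r - (x - u) ** 2)   # upper semicircle

a, h = 3.0, 1.0
s = (u - a) / f(a)                # from u = a + s * f(a)
lhs = f(a + h) - f(a)
rhs = h * (2 * s * f(a) - h) / (f(a + h) + f(a))
print(lhs, rhs)                   # both sides agree: -1.0
```

Here *f*[3] = 4 and *f*[4] = 3, so both sides evaluate to -1 exactly.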

This holds for any function and thus also for the exponential function. Now we have *s *= *e*^*a* by definition. For *e*^*x *this gives:

*e*^{*a* + *h*} – *e*^*a* = *h* (2 *s* *e*^*a* – *h*) / (*e*^{*a* + *h*} + *e*^*a*)

For COTP §12.1.8.3 we get, with Δ*x *= *h:*

d*f */ dx = {Δ*f *// Δ*x, *set Δ*x *= 0}

= {(*e*^{*a* + *h*} – *e*^*a*) // *h*, set *h* = 0}

= {(2 *s* *e*^*a* – *h*) / (*e*^{*a* + *h*} + *e*^*a*), set *h* = 0}

= *s*

This replaces Δ*f *// Δ*x* by the expression from the general formula, while the general formula was found by assuming a tangent circle, with *s *as the slope of the incline. There is the tricky aspect that we might choose any value of *s *as long as it satisfies *u = **a *+ *s **f*[*a*]. However, we can refer to the earlier discussion in §12.1.8.2 on the actual calculation.

The basic conclusion is that this “general formula” enhances the consistency of §12.1.8.3. The deduction however is not needed, since we have §12.1.8.1, but it is useful to see that this new elaboration doesn’t generate an inconsistency. In a way this new elaboration is distractive, since the conclusion that 1 = {(*e*^*h* – 1) // *h*, set *h *= 0} is much stronger.


Group theory creates different number systems, from natural numbers *N, *to integers *Z, *to rationals *Q, *to reals *R, *and complex plane *C, *and on to higher dimensions. For elementary and secondary education it is obviously useful to have the different subsets of *R. *But we don’t do group theory, for the notion of number is given by *R.*

It should be possible to agree on this (*):

- that *N* ⊂ *Z* ⊂ *Q* ⊂ *R*,
- that the elements in *R* are called numbers,
- whence the elements in the subsets are called numbers too.

Timothy Gowers has an exposition, though with some group theory, and thus we would do as much group theory as Gowers needs. There is also my book *Foundations of mathematics. A neoclassical approach to infinity* (FMNAI) (2015) (pdf online), so that highschool students need not be overly bothered by the complexities of infinity. FMNAI namely distinguishes:

- potential infinity with the notion of a limit to infinity
- actual infinity created by abstraction, with the notion of “bijection by abstraction”.

There arises a conceptual knot. When *A* is a subset of *B*, or *A* ⊂ *B*, then saying that *x* is in *A* implies that it is in *B*, but not necessarily conversely. Someone who focuses on *A* and forgets about *B* may protest against a person who discusses *B*. When we say that the rational numbers are “numbers” *because* they are in *R*, then group theorists might protest that the rationals are “only” numbers *because* (1) *Q* is an extension of *Z* by including division, and (2) we then *decide* that these can be called “numbers” too. Group theorists who reason like this are advised to consider the dictum that “after climbing one can throw the ladder away”. In the real world there are points of view. When Putin took the Crimea, his argument was that it already belonged to Russia, while others called it an annexation. In mathematics, it may be that mathematicians are people and have their own personal views. Yet the above (*) should be acceptable.

It should suffice to adopt this approach for primary and secondary education. Research mathematicians are free to do what they want at the academia, but let them not meddle in this education.

The expression 1 / 2 represents both the *operation* of division and the resulting *number*. This is an example of the “procept”, the combination of process and concept.

The procept property of *y */ *x *is the cause of a lot of confusion. The issue has some complexity of itself and we need even more words to resolve the confusion. Wikipedia (a portal and no source) has separate entries for “*division*“, “*quotient*“, “*fraction*“, “*ratio*“, “*proportionality*“.

In my book *Conquest of the Plane *(COTP) (2011), p47-58, I gave a consistent nomenclature (pdf online):

“Ratio is the input of division. Number is the result of division, if it succeeds.” (COTP p51)

This is not a definition of number but a distinction between input and output of division. My suggestion is to use the word *(static) **quotient *also for the* form *with *numerator* *y *divided by *denominator* *x.*

(static) quotient[*y, x*] = *y / x*

This fits the use in calculus of “difference and differential quotients”. The form doesn’t have to use a bar. Also a computer statement Div[numerator *y,* denominator *x*] would be a quotient.

This suggestion differs a bit from another usage in which the quotient would be the *outcome* of the division process, potentially with a remainder. We saw this usage for the polynomials. This convention is not universal, see the use of “difference quotient”. However, if there would be confusion between *outcome* and *form,* then use “*static quotient*” for the *form.* This is in opposition to the *dynamic quotient *that is relevant for the derivative, as *Conquest of the Plane *shows.

Check also the notion of proportionality in COTP, page 77-78 with the notion of proportion space: {denominator *x*, numerator *y*}. Division as a process is a multidimensional notion. The wikipedia article (of today) on proportionality fits this exposition, remarkably with also a diagram of proportion space, with the denominator (cause) on the horizontal axis and the numerator (effect) on the vertical axis (instead of reversed), *as it should be* because of the difference quotient in calculus. In *Conquest of the Plane *there is also a vertical line at *x *= 1, where the numerators give our numbers (a.k.a. slope or tangent).

My nomenclature uses the quotient and the distinction in subsets of numbers, and I tend to avoid the word fraction because of apparent confusions that people have. When someone gives a potential confusing definition of fractions, my criticism doesn’t consist of providing a proper definition for fractions, but I point out the confusion, and then refer to the above.

Below, I will also refer to the suggestion by Pierre van Hiele (1973) to abolish fractions (i.e. what people call these), and I will mention a neat trick that provides a much better alternative.

Number also means *satisfying a standard form*. Thus “number” is not something mysterious but is a form, like the other forms, yet standardised.

For example, we have 2 / 4 = 1 / 2, yet 1 / 2 has the standard form of the rationals so that 2 / 4 needs to be simplified by eliminating common prime factors. The algebra of 2 / (2 2) = 1 / 2 can be seen as “rewriting the form”.
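Python’s `fractions.Fraction` (used here merely as a convenient stand-in) performs exactly this rewriting to the standard form:

```python
from fractions import Fraction

print(Fraction(2, 4))                     # 1/2: common primes eliminated
print(Fraction(2, 4) == Fraction(1, 2))   # True: two forms, one number
print(Fraction(25, 10))                   # 5/2
```

The constructor eliminates the common prime factors, so only the standard representative of the equivalence class is ever shown.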

What the standard is, depends upon the context. We can do sums on natural numbers, integers, rationals, reals. In education students have to learn how to rewrite particular forms into a particular standard. Students need to know the standard forms, not the group theory about the subset of numbers they are working in.

The equality sign in *x* = *a* is ambiguous. Computer algebra tends to avoid ambiguity. For example in *Mathematica*: Set (=) vs Equal (==) vs SameQ (===, identically the same). Doing computer algebra would help students to become more precise, compared to current textbooks. Learning is going from vague to precise.

The equality sign in highschool tends to mean “of equal value”, which is “==” above. But two expressions can only be of equal value when they represent the identically same value. Thus *x* == *a* would amount to *Num*[*x*] === *Num*[*a*]. The standard mathematical phrase is “*equivalence class*” for a number in whichever format, e.g. with the numerical value given by the vertical position on the line at *x* = 1 (also for the denominator 1).

The standard form takes one element of an “equivalence class” (depending upon the context of what numbers are on the table, e.g. 1 / 2 for the rationals and 0.5 for the reals). (See COTP p45-48 for issues of “approximation”.)
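A rough Python analogue of the Set / Equal / SameQ distinction (only an analogy; Python has no exact counterpart of SameQ, so the type comparison stands in for “identically the same form”):

```python
from fractions import Fraction

x = Fraction(1, 2)                 # assignment, like Mathematica's Set (=)
print(x == Fraction(2, 4))         # True: equal value (Equal, ==)
print(x == 0.5)                    # True: 0.5 represents the same value
print(type(x) is type(0.5))        # False: not the identically same form
```

Equal value across formats, yet the rational form and the decimal form remain distinguishable objects.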

Multiplication is no procept. For multiplication there is a clear distinction between the operation 2 * 3 and the resulting number 6. When your teacher asks you to calculate 2 * 3 then the answer of 2 * 3 is correct but likely not accepted. The smart-aleck answer 2 * 3 = 3 * 2 is also correct, but then the context better be group theory.

It is a pity that group theory adopted the name “group theory”. My proposal for elementary school is to replace the complicated word “multiplication” by “group, grouping”. With 12 identical elements, you can make 4 groups of 3. (With identical elements this isn’t combinatorics.) See *A child wants nice and no mean numbers *(CWNN) (2015). If this use of “group, grouping” is confusing for group theory, then they better change to something like “generalised arithmetic”.

The world originally had the notion of number, like counting fingers or measuring distance, but then group theory hijacked the word, and assigned it with a generalised meaning, whence communication has become complicated. Their use of language might cause the need for the term *numerical value. *I would like to say that 2 is identically the same number in *N, Z, Q *and *R, *but group theorists tend to pedantically assert that the notion of number is relative to the set of axioms. In the Middle Ages, people didn’t know negative numbers, and they couldn’t even think about -2. Only by defining -2 as a number too, it could be included as a number. This sounds like Baron von Muenchhausen lifting himself from the swamp. The answer to this is rather that -2 is still a number even though it wasn’t recognised as this. I would like to insist that we use the term “number” for the numerical value in *R, *so that we can use the word “number” in elementary school in this safe sense. Group theorists then must invent a word of their own, e.g. “generalised number” or “gnumber”, for their systems.

Changing the meaning of words is like having your car stolen, given another colour, and parked in front of your house as if it isn’t your car. Group theorists tend to focus on group theory. They tend not to look at didactics and teaching. When group theorists hear teachers speaking about numbers, and how 2 is the same number in *N* and *R*, then group theorists might smile arrogantly, for they “know better” that *N* and *R* are different number systems. This would be misplaced behaviour, for it is the group theorists themselves who hijacked the notion of number and changed its meaning. When research mathematicians have the idea that teachers of mathematics have no training in group theory, then they better first read Richard Skemp (1971, 1975), *The psychology of learning mathematics*. This was written with an eye on teaching mathematics (and training teachers) and contains an extensive discussion of group theory. (Though I don’t need to agree with all that Skemp writes.)

Peter van ‘t Riet edited Vredenduin (1991) “*De geschiedenis van positief en negatief*“, Wolters-Noordhoff, on the history of positive and negative numbers. Van ‘t Riet allows himself a concluding observation:

“Kijken wij er achteraf op terug, dan kan een gevoel van verwondering opkomen, dat begrippen die ons zo vanzelfsprekend en helder lijken, zo’n lange ontwikkelingsgeschiedenis hebben gehad waarin vooruitgang, terugval en nieuwe vooruitgang elkaar afwisselden. Opmerkelijk is dat begrippen zich soms pas echt ontwikkelen als zij bevrijd worden van een dominerende idee die eeuwenlang hun ontwikkeling in de weg stond. Dat is bij de negatieve getallen het geval geweest met de geometrisering van de algebra: de gedachte dat getallen representanten waren van meetkundige grootheden is eeuwen achtereen een obstakel geweest teneinde tot een helder begrip van negatieve getallen te komen. Achteraf vraag men zich af: hoe was het mogelijk dat eeuwenlang deze idee de algebra bleef domineren?” (p121)

Since we sometimes check Google Translate for the fun ways of its expressions, it is nice to let the machine speak again:

“If we look afterwards back, then bring up a sense of wonder that concepts which seem to us so obvious and clear, have had such a long history in which progress, relapse and further progress alternating. Remarkably concepts sometimes only really develop as they freed from a dominant idea that for centuries had their development path that is in the negative numbers was the case with the geometrization of algebra:. the idea that numbers representatives were of geometric quantities is centuries successively been an obstacle in order to achieve a clear understanding of negative numbers retrospect one question himself:. how was it possible that for centuries the idea continued to dominate the algebra?” (Google Translate)

Just to be sure: analytic geometry has the number line with negative numbers too. Van ’t Riet means the line segment, which always has a nonnegative length.

A step to answering his question is that mathematicians focus on abstraction, whence they are more guided by their own concepts rather than by empirical applications or the observations in didactics. I included this quote in the hope that group theorists reading this will again grow aware of human folly, and realise that they should support empirical didactics and not block it.

More noise is generated by the different “number formats” that have been developed over the course of history. We have forms 2 + ½ = 2½ = 5 / 2 = 25 / 10 = 2.5 = 2 + 2^{-1} (neglecting the Egyptians and such). We should not forget that the decimals are actually also a form or result of division. Another example is 0.365 = 3 / 10 + 6 / 100 + 5 / 1000. Only the infinite decimals present a problem, since then we need an infinite series of divisions, yet this can be solved. The various formats have their uses, and thus education must teach students what these are.
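The claim that decimals are also results of division can be made concrete (a sketch with Python’s exact rationals):

```python
from fractions import Fraction

x = Fraction(3, 10) + Fraction(6, 100) + Fraction(5, 1000)
print(x)              # 73/200, the standard rational form of 0.365
print(float(x))       # 0.365, the decimal form
print(2 + Fraction(1, 2) == Fraction(5, 2) == Fraction(25, 10))   # True
```

One number, several formats: 2 + ½, 5 / 2, 25 / 10 and 2.5 all land in the same equivalence class.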

An approach might be to only use numbers in decimal notation. However, the expression 1 / 3 is often easier than 0.33333…. Students must learn algebra. Compare 1 / 2 + 1 / 3 with 1 / *a* + 1 / *b.*

“But to understand algebra without ever really understood arithmetic is an impossibility, for much of the algebra we learn at school is a generalized arithmetic. Since many pupils learn to do the manipulations of arithmetic with a very imperfect understanding of the underlying principles, it is small wonder that mathematics remain a closed book to them.” (Skemp, p35)

The KNAW 2009 study on arithmetic education and its evidence and research is invalid. It forgot that pupils in elementary school have to learn particular algorithms in arithmetic in preparation for algebra in secondary education. It scored answers to sums as true / false and didn’t assign points to the intermediate steps, so that pupils who used trial and error also had the option to score well. In a 2011 thesis on the psychometrics of arithmetic, the word “algebra” isn’t mentioned, and various of its research results are invalid. There is a rather big Dutch drama on failure of education on arithmetic, failure of supervision, and breaches of integrity of science.

Irrational numbers started as a ratio. Consider a triangle with perpendicular sides 1, and then consider the ratio of the hypotenuse to one of those sides. The input √2 : 1 reduces to the number √2.

There are students who do 2 + ½ = 2½ = 2 ½ = 1, because in handwriting there might appear to be a space that indicates multiplication, compare 2*a* or 2√2 or 2 km where such a space can be inserted without problem. See the earlier weblog text on how Jan van de Craats tortures students. A proposal of mine since 2008 is to use 2 + ½ and to stop using 2½.

Yesterday I discovered Poisard & Barton (2007), who compare the teaching of fractions in France and New Zealand, and who also advise 2 + ½. The German wikipedia also has a comment on the confusing notation of 2½. I haven’t looked at the thesis by Rollnik yet.

For a *standard form* for the rationals, the rules are targeted at facilitating the location on the number line, while we distinguish the operation *minus *from the *sign* of a negative number (as -2 = negative 2).

- If a rational number is equal to an integer, it is written as this integer, and otherwise:
- The rational number is written as an integer *plus* or *minus* a quotient of natural numbers.
- The integer part is not written when it is 0, unless the quotient part is 0 too (and then the whole is the integer 0).
- The quotient part has a denominator that isn’t 0 or 1.
- The quotient part is not written when the numerator is 0 (and then the whole is an integer).
- The quotient part consists of a quotient (form) with an (absolute) value smaller than 1.
- The quotient part is simplified by elimination of common primes.
- When the integer part is 0 then *plus* is not written and *minus* is transformed into the negative sign written before the quotient part.
- When the integer part is nonzero then there is *plus* or *minus* for the quotient part in the same direction as the sign of the integer part (reasoning in the same direction).

Thus (- 2 – ½) = (-3 + ½) but only the first is the standard form.
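As a sketch, the rules above can be coded directly (the helper `standard_form` is my own hypothetical routine, not COTP’s RationalHold; Python’s `Fraction` supplies the prime elimination):

```python
from fractions import Fraction

def standard_form(q):
    """Render a rational per the rules above, e.g. -5/2 -> '-2 - 1/2'."""
    if q.denominator == 1:                    # equal to an integer
        return str(q.numerator)
    sign = "-" if q < 0 else ""
    aq = abs(q)
    whole = aq.numerator // aq.denominator
    part = aq - whole                         # quotient part, 0 < part < 1
    if whole == 0:                            # integer part 0: not written
        return f"{sign}{part.numerator}/{part.denominator}"
    op = "-" if q < 0 else "+"                # sign in the same direction
    return f"{sign}{whole} {op} {part.numerator}/{part.denominator}"

print(standard_form(Fraction(-5, 2)))   # -2 - 1/2, not -3 + 1/2
print(standard_form(Fraction(1, 2)))    # 1/2
print(standard_form(Fraction(4, 2)))    # 2
```

The key design point is “reasoning in the same direction”: the quotient part carries the same sign as the integer part, so -5/2 becomes -2 - ½ rather than -3 + ½.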

PM 1. Mathematica has the standard form 5 / 2. *Conquest of the Plane *p54 provides the routine RationalHold[*expr*] that puts all Rational[*x, y*] in *expr* into HoldForm[IntegerPart[*expr*] + FractionalPart[*expr*]].

PM 2. Digits are combined into numbers, so that we don’t have 28 = 2 * 8 = 16 = 6. Nice is:

“For example, 7 (4 + *a*) is equal to 28 + 7*a* and not 74 + 7*a*.” (Skemp, p230)

A new suggestion is to use *H* = -1. Then we get 2 + ½ = 2 + 2^*H* = 5 2^*H*.

Above quotient form then becomes (*y* *x*^*H*) and the dynamic quotient remains (*y* // *x*).

There are students who struggle with *a* – (-*b*) = *a* – (-1) *b*, perhaps because subtraction actually is a form of multiplication. Curiously, this is another issue of inversion that is made easier by using *H*, with *a* – (-*b*) = *a* – *H* *b* = *a* + *H* *H* *b* = *a* + *b*. See the last weblog entry that division is repeated subtraction. The only requirement is that each number also has an inverse, zero excluded, so that these inverses can be subtracted too. For example 4 3^*H* = (3 + 1) 3^*H* = 1 + 3^*H*, and:

4 – (1 + 3^*H*) – (1 + 3^*H*) – (1 + 3^*H*) = 0
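A quick check of this *H* = -1 algebra, using Python’s exact rationals as a stand-in (an illustration, not the author’s notation):

```python
from fractions import Fraction

H = -1
a, b = 7, 3
print(a - b == a + H * b)          # True: subtraction is adding H times
print(a - (-b) == a + H * H * b)   # True: a - (-b) = a + b

x = 4 * Fraction(3) ** H           # 4 * 3^H = 4/3
print(x)
print(4 - x - x - x)               # 0: three subtractions exhaust 4
```

Subtracting 4 3^*H* three times from 4 gives 0, which confirms 4 3^*H* = 1 + 3^*H* by the repeated-subtraction view of division.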

The last weblog entry on group theory showed that group theory concentrates on numbers, whence it (cowardly) avoids the perils of education on the various number formats.

Group theory mathematicians will tend to say that 1 / 2 = 2 / 4 = 50 / 100 = … are all members of the same “*equivalence class*” of the number 1 / 2, whence their formats are no longer interesting and can be neglected.

In itself it is a laudable achievement that mathematics has developed a framework that starts with the natural numbers, extends with negative integers, develops the rationals, and finally creates the reals (and then more dimensions). This construction comes along with algorithms, so that we know what works and what doesn’t work for what kind of number. For example, there are useful prime numbers, that help for simplifying rationals. For example 3 * (1 / 3) = 1 whence 3 * 0.3333… = 0.9999… = 1.000… = 1. (Thus the decimal representation is not quite unique, and this is another reason to keep on using rational formats (when possible).)
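The point that rational formats stay exact where decimal formats approximate can be illustrated (Python sketch):

```python
from fractions import Fraction

print(3 * Fraction(1, 3) == 1)           # True: exact, no 0.999... issue
print(0.1 + 0.1 + 0.1 == 0.3)            # False: decimals as floats round
print(Fraction(1, 2) + Fraction(1, 3))   # 5/6, like the algebra 1/a + 1/b
```

This is the computational side of the advice to keep using rational formats when possible.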

When these group theory research mathematicians design a training course for aspiring teachers of mathematics, they tend to put most emphasis on group theory, and forget about the various number formats. This has the consequences:

- From their training, teachers become deficient in knowledge about number formats (e.g. Timothy Gowers’s article), even though those formats are more relevant to teachers because they are relevant for their students.
- There is also conditioning for a future lack of knowledge. The aspiring teachers are trained on abstraction and will tend to grow blind to the problems that students have when dealing with the various formats.
- All this supports the delusion:

“We should teach group theory so that the students will have less problems with the algebra w.r.t. the various number formats. (For, they can neglect much algebra, like we do, since most forms are all in the same equivalence classes.)” (No quote)

Bas Edixhoven (Leiden) is chair of the executive board of Mastermath, a joint Dutch universities effort for the academic education of mathematicians. They also do remedial teaching for students who want to enroll into the regular training for teacher of mathematics but who have deficiencies in terms of mathematics. Think about a biologist who wants to become a teacher of mathematics. For those students the background in empirical science is important, because didactics is an empirical science too. Such students are an asset to education, and they should not be scared away by treating them as if they want to become research mathematicians. Obviously there are high standards of mathematical competence, but this standard is not the same as for doing research in mathematics.

- The “Foundations” syllabus for remedial teaching 2015 written by Edixhoven indeed *looks at group theory with the neglect of number formats.* The term “fraction” (Dutch “breuk”) is used without definition, while there is also the expression “fraction form” (Dutch “breukvorm”). I get the impression that Edixhoven uses *fraction* and *fraction format* as identical. Perhaps he means the *procept*? The fractions are not the rationals, since apparently π / 2 has a fractional form too.
- At a KNAW conference in 2014 on the education of arithmetic, Edixhoven presented standard group theory, presumably thinking that his audience had never heard about it and hadn’t already decided that its role for non-university education is limited. Edixhoven insulted his audience (including me) by not first studying what didacticians like Skemp had already said before about group theory in education.

I find it quite bizarre that mathematics courses at university for training aspiring teachers would neglect the number formats and treat these (remedial) student-teachers as if they want to become research mathematicians. Obviously I cannot really judge on this, since I am no research mathematician, so that I don’t know what it takes to become one. I only know that I have a serious dislike of it. Yet the group theory taught is out of focus for what would be helpful for teaching mathematics.

PM 1. The Edixhoven 2014 approach at KNAW fits Van Hiele (1973), who also suggests having a bit of group theory in highschool. Yet there is the drawback of confusion about the power -1, which students might read as subtraction. I would agree on this idea of having some group theory, but with the use of *H* = -1 and not without it. Let us first introduce the universal constant *H* = -1, thus also in elementary school where pupils should learn about division, and then proceed with some group theory in junior highschool.

PM 2. Edixhoven wrote this “Foundations” syllabus together with Theo van den Bogaard who wrote his thesis with Edixhoven. Van den Bogaard has only a few years of experience as teacher of mathematics. Van den Bogaard was secretary of a commission cTWO that redesigned mathematics education in Holland, with a curious idea about “mathematical think activities” (MTA). Van den Bogaard has an official position as trainer of teachers of mathematics but failed to see the error by the psychometrians in the KNAW 2009 study on education on arithmetic. I informed him about my comments on cTWO, MTA and KNAW 2009 but he didn’t respond. Now there is the additional issue of this curious “Foundations” syllabus. Four counts down on didactics and still training aspiring teachers.

These and other considerations caused me to write this letter to Mastermath.

The following indicates that research mathematicians can have their own subgroups or individuals who meddle with education. None is qualified for education, and one wonders whether they can keep each other in check.

Research mathematicians may develop a passion for education and interfere in education, and then start to invent their own interpretations, and then teach those to elementary schools and their aspiring teachers. These mathematicians are not qualified for primary education and apparently think that elementary school allows loose standards (since they can observe errors indeed). Then we get the blind (research mathematicians) helping the deaf (elementary school teachers), but the blind can also be arrogant, and lead the two of them into the abyss.

A September 2015 protest concerned Jan van de Craats, now emeritus at UvA. For the topic of division, his name pops up again. In this lecture on fractions for a 2010 workshop for primary education, Van de Craats argues for example as follows (my translation). It might seem unfair to criticise this, since *these are only sheets*. Yet even sheets should have a consistent set of definitions behind them, and these sheets contribute to confusion. Remember that I didn’t give a definition of “fraction”, and that I propose the abolition of what many people apparently call “fraction”.

- Sheet 3:
*“Three sorts of numbers: integers, decimals, fractions”.*

(a) The main problem is the word “sort”. If he merely means “form” (with the decimals as the standard form that gives “the” number) then this is okay, but if he means that there are really differences (as in group theory) then this is problematic. A professor of mathematics should try to be accurate, and I don’t see why Van de Craats regards “sorts of” as accurate.

(b) If he identifies fractions with the rationals (but see sheet 26) then we might agree that *Z* ⊂ *Q* ⊂ *R*, though there are group theorists who argue that these are different number systems, and it is not clear whether Van de Craats would ask the group theorists not to meddle in education as he himself is doing.

(c) My answer: for education it seems best to stick to “various forms, one number (for standard form)”.

- Sheet 30:
*“A fraction is the outcome of a division.”*

(a) As a fraction is a number (Sheet 3), presumably 8 : 4 → 4 / 2 might be acceptable: (i) it is an outcome, (ii) the answer is numerically correct (as it belongs to the equivalence class), (iii) there is no requirement on a standard form (here).

(b) This doesn’t imply the converse, that the outcome of a division is always a fraction. Then it is either an integer (but then also a fraction (Sheet 25)) or a decimal (but then also a fraction (Sheet 26)). Thus: fraction iff outcome of division.

(c) PM. My definition was: “Ratio is the input of division. Number is the result of division, if it succeeds.” (COTP p51), which doesn’t define number but distinguishes input and output.

- Sheet 8:
*“Cito doesn’t test (mixed) fractions anymore in the primary school final examination.”* As an observation this might be correct, but if Van de Craats had had a proper background in didactics, then he should have been able to spot the error by the psychometricians in the KNAW 2009 report, which should have been sufficient to effect change, instead of setting up this “course in fractions” (that he isn’t qualified for).

- Sheet 18:
*Pizza model.* Didactics shows that students find this difficult. Use a rectangle.

- Sheet 25:
*“Integers are also fractions (with denominator 1).”* On form, students must know the difference between integers and fractions (whatever those might be, see Sheet 30). The answer of (3 – 1) / (2 – 1) = ? had better be 2 and not 2 / 1, because the latter can be simplified.

- Sheet 26:
*“Decimals are also fractions.”* Thus *fractions are not the rational numbers.* The example is that √2 is irrational, *also in decimal expansion* (a “fraction”). Van de Craats apparently holds fractions and decimals to be identical, only written in different forms. Thus also an infinite sum of fractions would still be a fraction. A fraction then is not just the form of the quotient as defined in *Conquest of the Plane* and above (though perhaps it can be written like this ?).

- Sheet 27:
*“However, not all fractions are also decimals.”* This is a mystery. There are only three “sorts of” numbers, and w.r.t. Sheet 30 we found that fraction iff division, and all numbers should be divisible by 1. Also, the real numbers contain all numbers we have seen till now (not the complex numbers). Thus there would be phenomena called “fractions” (but still numbers, not algebra) not in the reals ? It cannot be 0 / 0, since the latter would be a result that cannot be accepted. Division 0 : 0 might be a proper question, with the answer that the result is undefined. Perhaps he means to say that “1 / 2” doesn’t have the form of “0.5”, and that the expressions differ ? But then we are speaking about form again, and Van de Craats spoke about “sorts of numbers” and not about “same numbers with different forms”.

- Sheet 28:
*“This course doesn’t offer a one-to-one model for discussion at school.”* It sounds modest but I don’t know what this means. Perhaps he means that the sheets aren’t a textbook.

- Sheet 30:
*“A fraction is the outcome of a division.”* (I moved this up.)

- Sheet 33:
*“4 : 7 = 4 / 7”.* Apparently the “:” stands for the operation of division and “4 / 7” for the result. Apparently Van de Craats wants to get rid of the procept. The equality sign cannot mean identically the same, because otherwise there would be no difference between input and output. Is only 4 / 7 the right answer, or is 8 / 14 allowed too ? Perhaps one can teach students that 4 : 7 is a proper question and that 8 / 14 is unacceptable since this must be 4 / 7. However, 4 : 1 would be a proper question too, and then Van de Craats also argues that 4 / 1 would be a fraction (and result of division).

- Sheet 65:
*“Actually 2 4/5 means 2 + 4/5.”* (Van de Craats read an article of mine.) It would have been better if he had stated that the first is a horrible convention, and had proceeded with the second. He calls the form a “mixed fraction” while English has “mixed number”. Lawyers might have to decide whether “fractions are numbers” implies that a “mixed fraction” is also a “mixed number”.

If a professor of mathematics becomes confused on such an “elementary (school)” issue of fractions (I still don’t know what is meant by this), why would the student believe that anyone can master this apparently superhumanly difficult subject ?

Would research mathematicians who do group theory be able to correct Van de Craats ?

Let us consider Bas Edixhoven again, see again his sheets.

Or would Edixhoven argue that he himself looks at natural numbers, integers, rationals and reals, so that he has no view on “fractions”, as apparently defined by Van de Craats ? Though the “Foundations” syllabus refers to the word without definition and Edixhoven might presume that aspiring teachers of mathematics know what those fractions are.

Edixhoven in the 2014 lecture only suggests that there better be more proofs and axiomatics in the highschool programme, and he gives the example of a bit of group theory for arithmetic. He also explains modestly that he speaks “from his own ivory tower” (quote). Thus we can only infer that Edixhoven will remain in this ivory tower and will not stop the blind (but also arrogant) Van de Craats from leading (or at least trying to lead) the deaf (elementary school teachers) into the abyss.

However, professor Edixhoven also left the ivory tower and joined the real world. At Mastermath he is involved in training aspiring teachers. Since February 2015 he has been a member of the Scientific Advisory Board of the mathematics department of the University of Amsterdam, where professor Van de Craats still has his homepage with this confusing “course on fractions”. I informed this board in Autumn 2015 about the problematic situation that Van de Craats propounds on primary and secondary education but is not qualified for this. I have seen no correction yet. Apparently Edixhoven doesn’t care or is too busy scaring aspiring teachers away. Apparently, when a teacher of mathematics criticises him, then this teacher obviously must be deficient in mathematics, and should follow a course for due indoctrination in the neglect of didactics of mathematics.


The change from ring *Z* to field *R* is not quite the inclusion of division – since the ring already has *implied division*, namely as *repeated subtraction* – but the change consists of extending the set of “accepted numbers” with inverse elements *x*^{H} for *x* ≠ 0.

If the ring has variables and expressions, then we can form the expression 1 = 2 *z*, and we effectively have *z* = 2^{H}, and then we might wonder whether it actually matters much whether this *z* is included in the set of accepted numbers or not.

Part of the confusion in this discussion is caused by the fact that we might regard 2^{H} as the *operation* (of halving) or as the *result* (the number a half): the ambiguity of the procept.

The discussion within group theory might be a victim of the phenomenon of the procept. When the discussion is confused, perhaps group theory itself is confused. We should get enhanced clarity by removing the ambiguity of operation and result, but perhaps textbooks then become thicker.

Subsequently, we get a distinction between:

- *Mathematics for which group theory isn’t so relevant* – such that there is a logical sequence from natural numbers to integers, to rationals, to reals, to multidimensional reals, for all is implied by logic and algebra, and only the end result matters.
- *Mathematics for models for which group theory is relevant* – i.e. for models for which it is crucial that e.g. *Z* has no *z* such that 1 = 2 *z*.

The crux lies in the elements of the sets, as the operations themselves are actually implied.

A model might be the number of people. Take an empty building. A biologist, physicist and mathematician watch the events. Two people enter the building, and some time later three people leave the building. The biologist says: “They have reproduced.” The physicist says: “There was a quantum fluctuation.” The mathematician says: “There is -1 person in the building.”

The following develops the example of implied division. This discussion has been inspired by both the recent discussion of the “ring of polynomials” (thus without division but still with divisor and remainder) and the observation that “realistic mathematics education” (RME) allows students to avoid long division and allows “partial quotients” (repeated subtraction).

*Z* rewrites repeated addition 3 + 3 + 3 + 3 = 12 as multiplication 4 * 3 = 12.

*Z* allows the converse 12 – 3 – 3 – 3 – 3 = 0 and also the expression 12 – 4 * 3 = 0.

*Z* doesn’t allow the rewrite of the latter into 12 / 4 = 3.

Yet 12 – 4 * 3 = 0 gives the notion of “implied division”, namely, find the *z *such that 12 – 4 * *z* = 0.

This notion of “implied division” is well defined, but the only problem is that we cannot find a number *z *in *Z *that satisfies 1 – 2*z* = 0.
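This search for *z* can be illustrated with a short Python sketch; the function name `implied_division` and the finite search bound are my own illustration, not part of the text:

```python
def implied_division(y, x, bound=1000):
    """Find z in Z with y - x*z = 0, searching |z| <= bound.

    Returns z if such an integer exists, else None
    (i.e. no element of Z satisfies the equation)."""
    for z in range(-bound, bound + 1):
        if y - x * z == 0:
            return z
    return None

# 12 - 4*z = 0 has the solution z = 3 within Z.
print(implied_division(12, 4))   # 3
# 1 - 2*z = 0 has no solution within Z (it would need z = 1/2).
print(implied_division(1, 2))    # None
```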

If we extend *Z* with basic elements *n*^{H} for *n* ≠ 0, then such equations get solutions, and we are on the way to the rationals *Q*.

The following discusses this with formulas.

Multiplication is repeated addition. The ring of integers has the notion of subtraction. Define “implied division” of *y *by *x *as the repeated subtraction from *y* of some quantity *z, *for *x *times with remainder 0. For *x* ≠ 0:

*y* – *x z* = 0 … (* definition)

To refer to this property, we use the abstract symbol *H*, though we later use *H* = -1.

*x*^{H} *y* = *z* ⇔ *y* – *x z* = 0

For *x *itself:

*x*^{H} *x* = 1

We have 0 *z* = 0 for all *z *in the *ring*. Then for implied division by zero we have:

*y – *0 *z* = 0 ⇒ * **y* = 0

As above, for *y* = 0:

0* – *0 *z* = 0 * *for any *z *

0^{H} 0 = *z*, for any *z* in the ring

Thus the rule is: For implied division *within the ring*, the denominator cannot be 0, unless the numerator is 0 too, in which case any number would satisfy the equation.

This is not necessarily “infinity” or “undefined” but rather “any *z* in *Z*“. The solution set is equal to *Z *itself. There is a difference between functions (only one answer) and correspondences (more answers).
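The difference between a function (one answer) and a correspondence (possibly many answers) can be made concrete by returning the whole solution set; this Python sketch, with an illustrative finite window on *Z*, is my own:

```python
def implied_division_set(y, x, universe):
    """Solution set { z in universe : y - x*z == 0 }.

    A function would give one answer; a correspondence
    may give none or many."""
    return {z for z in universe if y - x * z == 0}

Z_part = set(range(-5, 6))  # a finite window on Z, for illustration
print(implied_division_set(12, 4, Z_part))  # {3}: ordinary division
print(implied_division_set(5, 0, Z_part))   # set(): 5 = 0*z is impossible
print(implied_division_set(0, 0, Z_part))   # the whole window: any z works
```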

A ring is commonly turned into a field by including the normal definition of division:

*x* ≠ 0 ⇒ *x*^{H} *x* = 1

With this definition we get (multiplying left or right):

*x*^{H} *y* = *x*^{H} *x z* = *z*

The curious observation is that a definition of division seems superfluous, since we already have implied division. The operation (*) already exists within the ring. We included a special notation for it, but this should not distract from this basic observation. If you have a left foot then it doesn’t matter whether you call it George or Harry.

The natural numbers can be factored into prime numbers. When we solve 6 / 3 = 2, then we mean that 6 can be factored as 2 times 3, and that we can eliminate the common factor.

6 / 3 = *z* ⇔ 6 = 3 *z* ⇔ 2 · 3 = 3 *z* ⇔ 3 (2 – *z*) = 0 ⇔

3 = 0 or (2 – *z*) = 0

But, again, this algorithm doesn’t work for a case like 1 = 2 *z.*
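The factor-elimination step can be sketched in Python; `divide_by_factoring` is a hypothetical helper of mine that only succeeds when the divisor is fully a common factor, mirroring why the algorithm fails on 1 = 2 *z*:

```python
from math import gcd

def divide_by_factoring(y, x):
    """Divide y by x by eliminating the common factor, as in 6 / 3:
    6 = 2*3, so eliminating the factor 3 leaves 2. Returns None
    when x is not a factor of y (e.g. 1 / 2, unsolvable in Z)."""
    g = gcd(y, x)
    if g != abs(x):          # x is not fully a common factor of y
        return None
    return y // x

print(divide_by_factoring(6, 3))  # 2
print(divide_by_factoring(1, 2))  # None
```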

Let us consider the implied division of 1 by 2. This generates:

2^{H} 1 = *z* ⇔ 1 – 2 *z* = 0

2^{H} = *z* ⇔ 1 = 2 *z*

Thus we don’t actually need to know what this *z *is, since we have the relevant expressions to deal with it.

The point is: when we run through all elements in *Z* = { … -3, -2, -1, 0, 1, 2, 3, … }, then we can prove that none of these satisfies 1 = 2 *z*.

Thus the core of group theory lies in the elements of the sets, and less in the operations, since these are implied.

The basic notion is that 0 has successor 1 = *s*[0], and so on, and this gives us *N*. That 0 is the predecessor of *s*[0] generates the idea of inversion, so that *s*[*H*] = 0 for *H* = -1. This gives us *Z*. Addition leads to subtraction, to multiplication, to division. The core of addition doesn’t change, only the “numbers”.

Thus, group theory might have a confusing language that focuses on the *operations,* while the actual discussion is about the *numbers* (since the operations are already available and implied).

Thus, once we accept algebra, then the real numbers can be developed logically, and it is a bit silly to speak about “group theory”, since there are only steps, and all is implied. It only makes sense for applications to models, such as the notion that there aren’t half people and such.

It remains relevant that some algorithms may only apply to some domains and not others. Factoring natural numbers into prime numbers still works for the natural numbers embedded in the reals, yet, it is not clear whether such a notion of factoring would be relevant for other real numbers.

We might consider including the element 0^{H} in the ring, to create 〈ring, 0^{H}〉.

(1) If we maintain that 0 *z* = 0 for all *z *in 〈ring, 0* ^{H}*〉 then:

0* ^{H} *0 = 0

Observe that this is not a deduction, but a definition that 0 *z* = 0 for all *z*.

One viewpoint is that there is a conflict between “any *z*” and “only *z *= 0″ so that we cannot adopt this definition. Another viewpoint is that the latter uses the freedom of the former.

(2) When we write 0^{H} as ∞, then it might be clearer that 0^{H} 0 = ∞ 0 = 0 would be a deliberate choice rather than a deduction.

If we create the 〈ring, 0^{H}〉, then we might also hold that 0 *z* = 0 only applies to *z* in the original ring, so that:

0^{H} 0 = ∞ 0 remains undefined

(3) An option is to slightly revise the definition as repeated subtraction by *z* until the remainder equals that very quantity *z *again. Thus:

*y – *(*x *– 1) *z* = *z * (*** definition 2)

*x*^{H} *y* = *z* ⇔ *y* – (*x* – 1) *z* = *z*

For *x *= 0 we would now use *z – z *= 0 which might be less controversial.

0^{H} *y* = *z* ⇔ *y* – (0 – 1) *z* = *z* ⇔ *y* + *z* = *z*

*y* = *z* – *z* = 0

0* ^{H} y *= 0

However, the more common approach is that 0^{H} is undefined (or “infinity”).

PM. See also the earlier discussion on this weblog.


“One could claim that, just as the history of Western philosophy has been viewed as a series of footnotes to Plato, so the past 350 years of mathematics can be viewed as a series of footnotes to Descartes’ *Geometry.*” (Grabiner) (But remember Michel Onfray‘s observation that followers of Plato have been destroying texts by opponents. (Dutch readers check here.))

Both Cartesius and Fermat were involved in the early development of calculus. Both worked on the algebraic approach without limits. Cartesius developed the *method of normals *and Fermat the *method of adequality.*

Fermat’s method was algebraic itself, but later has been developed into the method of limits anyhow. When asked what the slope of a ray *y *= *s x *is at the point *x *= 0, then the answer *y* /* x *= *s *runs into problems, since we cannot use 0 / 0. The conventional answer is to use limits. This problem is more striking when one considers the special ray that is defined everywhere except at the origin itself. The crux of the problem lies in the notion of *slope *Δ*f */ Δ*x *that obviously has a problematic division. With set theory we can now define the “dynamic quotient”, so that we can use Δ*f *// Δ*x = **s *even when Δ*x* = 0, so that Fermat’s problem is resolved, and his algebraic approach can be maintained. This originated in 2007, see *Conquest of the Plane *(2011).
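A sketch in Python with sympy of the idea behind the dynamic quotient: first simplify the ratio algebraically under the assumption Δ*x* ≠ 0, then allow Δ*x* = 0 in the simplified form. The function name and the two-step procedure are my paraphrase, not the book's definition:

```python
import sympy as sp

x, dx, s = sp.symbols('x dx s')

def dynamic_quotient(df, dx_):
    """Sketch: simplify the ratio algebraically (as if dx_ != 0),
    so that the simplified form remains valid even at dx_ = 0."""
    return sp.cancel(df / dx_)

# Slope of the ray y = s*x, also at the origin x = 0:
f = s * x
df = f.subs(x, x + dx) - f           # Δf = s*(x + dx) - s*x
slope = dynamic_quotient(df, dx)
print(slope)              # s
print(slope.subs(dx, 0))  # still s: no division-by-zero problem remains
```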

Cartesius followed Euclid’s notion of *tangency. *Scholars tend to assign this notion to Cartesius as well, since he embedded the approach within his new idea of analytic geometry.

I thank Roy Smith for this eye-opening question:

“Who first defined a tangent to a circle as a line meeting it only once? From googling, it seems commonly believed that Euclid did this, but it seems nowhere in Euclid does he even state this property of a tangent line explicitly. Rather Euclid gives 4 other equivalent properties, that the line does not cross the circle, that it is perpendicular to the radius, that is a limit of secant lines, and that it makes an angle of zero with the circle, the first of which is his definition, the others being in Proposition III.16. I am wondering where the “meets only once” definition got started. I presume once it got going, and people stopped reading Euclid, (which seems to have occurred over 100 years ago), the currently popular definition took over. Perhaps I should consult Legendre or Hadamard? Thank you for any leads.” (Roy Smith, at StackExchange)

In this notion of tangency there is no problematic division, whence there is no urgency to use limits.

The reasoning is:

- (Circle & Line) A line is tangent to a circle when there is only one common point (or the two intersecting points overlap).
- (Circle & Curve) A smooth curve is tangent to a circle when the two intersecting points overlap (but the curve might cross the circle at that point so that the notion of “two points” is even more abstract).
- (Curve & Line) A curve is tangent to a line when the above two properties hold (but the line might cross the curve, whence we better speak about *incline* rather than *tangent*).

Consider the line *y *= *f*[*x*] = *c *+ *s x *and the point {*a, **f*[*a*]}. The line can also be written with *c *= *f*[*a*] – *s **a*:

*y – f*[*a*] = *s *(*x *– *a*)

The normal has slope –*s*^{H}, where we use *H* = -1. The normal goes through the point {*u*, 0} where it crosses the horizontal axis. Substituting this point into the normal through {*a*, *f*[*a*]}:

0 – *f*[*a*] = –*s*^{H} (*u* – *a*)

*s *= (*u – a*) / *f*[*a*]

*u *= *a *+ *s f*[*a*]

The circle has the formula (*x *– *u*)² + y² = *r*². Substituting {*a, **f*[*a*]} generates the value for the radius *r*² = (*a *– (*a *+ *s f*[*a*]))² + * f*[*a*]² = (1 + *s*²) *f*[*a*]² . The following diagram has {*c, **s, a*} = {0, 2, 3} and thus *u *= 15 and *r * = 6√5.
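The numbers of this worked example can be checked with a minimal numeric sketch:

```python
from math import isclose, sqrt

# Parameters of the worked example: line f(x) = c + s*x, point x = a.
c, s, a = 0, 2, 3
fa = c + s * a                 # f(a) = 6
u = a + s * fa                 # center of the tangent circle on the x-axis
r2 = (1 + s**2) * fa**2        # squared radius

print(u)                                # 15
print(isclose(sqrt(r2), 6 * sqrt(5)))   # True

# The point {a, f(a)} indeed lies on the circle (x - u)^2 + y^2 = r^2:
print(isclose((a - u)**2 + fa**2, r2))  # True
```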

For the method of normals and arbitrary function *f*[*x*], Cartesius’s trick is to substitute *y *= *f*[*x*] into the formula for the circle, and then solve for the unknown center of the circle.

(*x *– *u*)² + (*y – *0)² = *r*²

(*x *– *u*)² + *f*[*x*]² – *r*² = 0 … (* circle)

This expression is only true for *x *= *a, *but we treat it as if it were more general. The key property is:

Since {*a*, *f*[*a*]} satisfies the circle, this equation has a solution for *x *= *a *with a double root.

Thus there might be some *g *such that the root can be isolated:

(*x *– *a*)² *g *[*x, u*] = 0 … (* roots)

Thus, if we succeed in rewriting the formula for the circle into the form of the formula with the two roots, then we can use information about the structure of the latter to say something about *u.*

The method works for polynomials, that obviously have roots, but not necessarily for trigonometry and the exponential function.

The algorithm thus is: (1) Substitute *f*[*x*] in the formula for the circle. (2) Compare with the expression with the double root. (3) Derive *u*. (4) Then the line through {*a*, *f*[*a*]} and {*u*, 0} will give slope –*s*^{H}. Thus we also find *s*, and the incline (tangent) follows.

Consider the line *y *= *f*[*x*] = *c *+ *s x *again. Let us apply the algorithm. The formula for the circle gives:

(*x* – *u*)² + (*c* + *s x*)² – *r*² = 0

*x*² – 2*u x* + *u*² + *c*² + 2*c s x* + *s*² *x*² – *r*² = 0

(1 + *s*²) *x*² – 2 (*u* – *c s*) *x* + *u*² + *c*² – *r*² = 0

This is a polynomial. It suffices to choose *g*[*x*, *u*] = 1 + *s*², so that the coefficients of *x*² are the same. Also the coefficients of *x* must be the same. Thus, expanding (*x* – *a*)²:

(1 + *s*²) (*x*² – 2*a x* + *a*²) = 0

– 2 (*u* – *c s*) = -2 *a* (1 + *s*²)

*u* = *a* (1 + *s*²) + *c s* = *a* + *s* (*c* + *s a*) = *a* + *s f*[*a*]

which is the same result as above.
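The same coefficient comparison can be replayed with sympy, as a check on the algebra above:

```python
import sympy as sp

x, u, c, s, a, r = sp.symbols('x u c s a r')

# Substitute the line y = c + s*x into the circle (x - u)^2 + y^2 - r^2 = 0:
circle = sp.expand((x - u)**2 + (c + s*x)**2 - r**2)

# The double-root form (1 + s^2) * (x - a)^2, expanded:
double_root = sp.expand((1 + s**2) * (x - a)**2)

# Compare the coefficients of x: -2(u - c*s) must equal -2a(1 + s^2).
eq = sp.Eq(circle.coeff(x, 1), double_root.coeff(x, 1))
sol = sp.solve(eq, u)[0]
print(sp.simplify(sol - (a + s*(c + s*a))))   # 0, i.e. u = a + s*f(a)
```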

We can deduce a general form that may be useful on occasion. When we substitute the point {*a*, *f*[*a*]} into the formula for the circle, then we can find *r, *and actually eliminate it.

(*x *– *u*)² + *f*[*x*]² = *r*² = (*a *– *u*)² + *f*[*a*]²

*f*[*x*]² – *f*[*a*]² = (*a *– *u*)² – (*x *– *u*)²

(*f*[*x*] – *f*[*a*]) (*f*[*x*] + *f*[*a*]) = ((*a *– *u*) – (*x *– *u*)) ((*a *– *u*) + (*x *– *u*))

(*f*[*x*] – *f*[*a*]) (*f*[*x*] + *f*[*a*]) = (*a *– *x*) (*a *+ *x *– *2**u*)

*f*[*x*] – *f*[*a*] = (*a *– *x*) (*a *+ *x *– *2**u*) / (*f*[*x*] + *f*[*a*])

*f*[*x*] – *f*[*a*] = (*x *– *a*) (*2**u – x – a*) / (*f*[*x*] + *f*[*a*]) … (* general)

*f*[*x*] – *f*[*a*] = (*x *– *a*) *q*[*x, a, u*]

We cannot do much with this, since this is basically only true for *x *= *a *and *f*[*x*] – *f*[a] = 0. Yet we have this “branch cut”:

(1) *q*[*x*, *a*, *u*] = (*f*[*x*] – *f*[*a*]) / (*x* – *a*) if *x* ≠ *a*

(2) *q*[*a*, *a*, *u*] potentially found by other means

If it is possible to “simplify” (1) into another expression Simplify[*q*[*x, a, u*]] without the division, then the tantalising question becomes whether we can “simply” substitute *x *= *a. *Or, if we were to find *q*[*a, a, u*] via other means in (2), whether it links up with (1). These are questions of continuity, and those are traditionally studied by means of limits.

We can still use the general formula to state a theorem.

**Theorem.** If we can eliminate factors without division, then there is an expression *q*[*x, a, u*] such that evaluation at *x* = *a* gives the slope *s* of the line, or *q*[*a, a, u*] = *s, *such that at this point both curve and line are touching the same circle.

Proof. Eliminating factors without division in above general formula gives:

*q*[*x, a, u*] = (*2**u – x – a*) / (*f*[*x*] + *f*[*a*])

Setting *x *= *a *gives:

*q*[*a, a, u*] = (*u – a*) / *f*[*a*]

And the above *s *= (*u – a*) / *f*[*a*] implies that *q*[*a, a, u*] = *s*. QED

This theorem gives us the general form of the incline (tangent).

*y*[*x, a, u*] *= *(*x – a*) *q*[*a, a, u*] + *f*[*a*] … (* incline)

*y*[*x, a, u*] *= *(*x – a*) (*u – a*) / *f*[*a*] + *f*[*a*]

PM. Dynamic division satisfies the condition “without division” in the theorem. For, the term “division” in the theorem concerns the standard notion of static division.

Polynomials are the showcase. For polynomials *p*[*x*], there is the polynomial remainder theorem:

When a polynomial *p*[*x*] is divided by (*x *– *a*) then the remainder is *p*[*a*].

(Also, *x* – *a* is called a “divisor” of the polynomial if and only if *p*[*a*] = 0.)

Using this property we now have a dedicated proof for the particular case of polynomials.

**Corollary.** For polynomials *q*[*a*] = *s,* with no need for *u*.

Proof. Now, *p*[*x*] – *p*[a] = 0 implies that *x *– *a *is a root, and then there is a “quotient” polynomial *q*[*x*] such that:

*p*[*x*] – *p*[a] = (*x *– *a*) *q*[*x*]

From the general theorem we also have:

*p*[*x*] – *p*[*a*] = (*x *– *a*) *q*[*x, a, u*]

Eliminating the common factor (*x – a*) without division and then setting *x *= *a *gives *q*[*a*] = *q*[*a, a, u*] = *s*. QED
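The corollary can be verified for a sample polynomial (my choice) with sympy's exact polynomial division:

```python
import sympy as sp

x, a = sp.symbols('x a')

# A sample polynomial, chosen for illustration.
p = x**3 - 2*x + 5

# p[x] - p[a] has root x = a, so the quotient q[x] is exact (remainder 0).
q, rem = sp.div(p - p.subs(x, a), x - a, x)
print(rem)  # 0

# Evaluating q at x = a gives the slope, i.e. the derivative p'(a).
print(sp.simplify(q.subs(x, a) - sp.diff(p, x).subs(x, a)))  # 0
```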

We now have a sound explanation why this polynomial property gives us the *slope* of the polynomial at that point. The slope is given by the incline (tangent), and it must also be the slope of the polynomial because both touch the same circle.

See the earlier discussion about techniques to eliminate factors of polynomials without division. We have seen a new technique here: comparing the coefficients of factors.

Since *q*[*x*] is a polynomial too, we can apply the polynomial remainder theorem again, and thus we have *q*[*x*] = (*x *– *a*) *w*[*x*] + *q*[*a*] for some *w*[*x*]. Thus we can write:

*p*[*x*] = (*x *– *a*) *q*[*x*] + *p*[*a*]

*p*[*x*] = (*x *– *a*) ( (*x – a*)* w*[*x*] + *q*[*a*] ) + *p*[*a*] … (* Ruffini’s Rule twice)

*p*[*x*] = (*x *– *a*)²* w*[*x*] + (*x – a*) *q*[*a*] + *p*[*a*] … (* Range’s proof)

*p*[*x*] = (*x *– *a*)²* w*[*x*] + *y*[*x, a*] … (* with incline)

We see two properties:

- The repeated application of Ruffini’s Rule uses the indicated relation to find both *s* = *q*[*a*] and the constant *f*[*a*], as we have seen in the last discussion.
- Evaluating *f*[*x*] / (*x* – *a*)² gives the remainder *y*[*x, a*], which is the formula for the incline.

Michael Range proves *q*[*a*] = *s* as follows (in this article (p406) or book (p32)). Take above (*) and determine the error by subtracting the line *y* = *s* (*x* – *a*) + *p*[*a*] :

*error* = *p*[*x*] – *y *= (*x *– *a*)²* w*[*x*] + (*x – a*) *q*[*a*] – *s* (*x *– *a*)

= (*x *– *a*)²* w*[*x*] + (*x – a*) (*q*[*a*] – *s*)

The error = 0 has a root *x* = *a* with multiplicity greater than one if and only if *s *= *q*[*a*].

Now that we have established this theory, there may be no need to refer to the circle explicitly. It can suffice to use the property of the double root. Michael Range (2014) gives the example of the incline (tangent) at *x**² *at {*a, **a*²}. The formula for the incline is:

*f*[*x*] – *f*[*a*] = *s *(*x – a*)

*x**² * – *a**² *– *s *(*x – a*) = 0

(*x – a*) (*x *+ *a *– *s*) = 0

There is only a double root or (*x* – *a*)² when *s *= 2*a. *
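This double-root computation can be replayed with sympy:

```python
import sympy as sp

x, a, s = sp.symbols('x a s')

# Error between x^2 and the candidate line through {a, a^2} with slope s:
expr = x**2 - a**2 - s*(x - a)
factored = sp.factor(expr)
print(factored)   # factors as (x - a)*(x + a - s), up to term order

# The second factor also vanishes at x = a (double root) only if s = 2a:
print(sp.solve((x + a - s).subs(x, a), s))  # [2*a]
```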

Working directly on the line allows us to focus on *s, *and we don’t need to determine *q*[*x*] and plug in *x *= *a.*

Michael Range (2011) clarifies – with thanks to a referee – that the “point-slope” form of a line was introduced by Gaspard Monge (1746-1818), and that Descartes apparently did not think of this himself, and thus neither of plugging in *y* = *f*[*x*] here. However, observe that we can only maintain that there must be a double root on this line form too because {*a*, *f*[*a*]} still lies on a tangent circle.

[Addendum 2017-01-10: The later argument in a subsequent weblog entry becomes: If the function can be factored twice, then there is no need to refer to the circle. But when this would be equivalent to the circle then such a distinction is immaterial.]

When a circle touches a curve, it still remains possible that the curve crosses the circle. The original idea of two points merging together into an overlapping point then doesn’t apply anymore, since there is only one intersecting point on either side if the circle were smaller or bigger.

An example is the spline function *g*[*x*] = {If *x *< 0 then 4 – *x*² / 4 else 4 + *x*² / 4}. This function is C1 continuous at 0, meaning that the sections meet and that the slopes of the two sections are equal at 0, while the second and higher derivatives differ. The circle with center at {0, 0} and radius 4 still fits the point {0, 4}, and the incline is the line *y *= 4.
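The stated C1 properties of this spline can be checked symbolically; a minimal sketch:

```python
import sympy as sp

x = sp.symbols('x')

left = 4 - x**2 / 4    # branch for x < 0
right = 4 + x**2 / 4   # branch for x >= 0

# C1 at 0: values and first derivatives agree, second derivatives differ.
print(left.subs(x, 0) == right.subs(x, 0))                          # True
print(sp.diff(left, x).subs(x, 0) == sp.diff(right, x).subs(x, 0))  # True
print(sp.diff(left, x, 2) == sp.diff(right, x, 2))                  # False
```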

An application of above algorithm would look at the sections separately and paste the results together. Thus this might not be the most useful example of crossing.

In this example there might be no clear two “overlapping” points. However, observe:

- Lines through {0, 4} might have three points with the curve, so that the incline might be seen as having three overlapping points.
- Points on the circle can always be seen as the double root solutions for tangency at that point.

There is still quite a conceptual distance between (i) the story about the two overlapping points on the circle and (ii) the condition of double roots in the error between line and polynomial.

The proof given by Range uses the double root to infer the slope of the incline. This is mathematically fine, but this deduction doesn’t contain a direct concept that identifies *q*[*a*] as the slope of an incline (tangent): it might be any line.

We see this distinction between concept and algorithm also in the direct application to Monge’s point-slope formulation of the line. Requiring a double root works, but we can only do so because we know about the theory about the tangent circle.

The combination of circle and line remains the fundamental reason why there are two roots. Thus the more general proof given above, that reasons from the circle and unpacks *f*[*x*]² – *f*[*a*]² into the conditions for incline and its normal, is conceptually more attractive. I am new to this topic and don’t know whether there are references for this general proof.

(1) We now understand where the double root comes from. See the earlier discussion on polynomials, Ruffini’s rule and the meaning of division (see the section on “method 2”).

(2) There, we referred to polynomial division, with the comment: “Remarkably, the method presumes *x **≠ a, *and still derives *q*[*a*]. I cannot avoid the impression that this method still has a conceptual hole.” However, we now observe that we can compare the values of the coefficients of the powers of *x*, whence we can avoid also polynomial division.

(3) There, we had a problem that developing *p*[*x*] = (*x *– *a*)² *w*[*x*] + *y*[*x, a*] didn’t have a notion of tangency, in terms of Δ*f */ Δ*x*. However, we actually have a much older definition of tangency.

(4) The above states an algorithm and a general theorem with the requirements that must be satisfied.

(5) Cartesius wins from Fermat on this issue of the incline (tangent), and actually also on providing an exact method for polynomials, where Fermat introduced the problem of error.

(6) For trigonometry and exponentials we know that these can be written as power series, and thus the Cartesian method would also apply. However, the power series are based upon derivatives, and this would introduce circularity. However, the method of the dynamic quotient from 2007 still allows an algebraic result. The further development from Fermat into the approach with limits would become relevant for more complex functions.

PM. The earlier discussion referred to Peter Harremoës (2016) and John Suzuki (2005) on this approach. New to me (and the book unread) are: Michael Range (2011), the recommendable Notices, or the book (2015) – review Ruane (2016) – and Shen & Lin (2014).


At issue is: Can we avoid the use of limits when determining the derivative of a polynomial ?

A sub-issue is: Can we avoid division that requires a limit ?

We use the term *incline* instead of *tangent (line)*, since this line can also cross a function and not just touch it.

We use *H* = -1, so that we can write *x* *x*^{H} = *x*^{H} *x* = 1 for *x* ≠ 0.

Ruffini’s Rule is a method not only to *factor* polynomials but also to *isolate* the factors. A generalised version is called “synthetic division”, for the reason that it isn’t actually division. On wikipedia, Ruffini’s Rule is called “Horner’s Method”. On mathworld, the label “Horner’s Method” is used for something else, but related again. My suggestion is to stick to mathworld.

Thus, the issue at hand would seem to have been answered by Ruffini’s Rule already. When we can avoid division then we don’t need a limit around it. However, our discussion is about whether this really answers our question and whether we really understand the answer.

I thank Peter Harremoës for informing me about both Ruffini’s Rule and some neat properties that we will see below. His lecture note in Danish is here. Surprisingly for me, he traced the history back to Descartes. Following this further, we can find this paper by John Suzuki, who identifies two key contributions by Jan Hudde in Amsterdam 1657-1658. Looking into my copy of Boyer’s *The history of the calculus* now, page 186, I must admit that this didn’t register with me when I read it originally, as it registers now. We see the tug and push of history with various authors and influences, and thus we should be cautious about claiming who did what when. Suzuki’s statement remains an eye-opener.

“We examine the evolution of the lost calculus from its beginnings in the work of Descartes and its subsequent development by Hudde, and end with the intriguing possibility that nearly every problem of calculus, including the problems of tangents, optimization, curvature, and quadrature, could have been solved using algorithms entirely free from the limit concept.” (John Suzuki)

Apparently Newton dropped the algebra because it didn’t work on trigonometry and such, but with modern set theory we can show that the algebraic approach to the derivative works there too. For the discussion below: check that limits can be avoided.

When we have 2 *x* = 6, then we can determine 2 *x* = 2 · 3, and recognize the common factor 2. By the human eye, we can see that *x* = 3, and then we have isolated the factor 3. But in mathematics, we must follow procedures as if we were a computer programme. Hence, we have the procedure of eliminating 2, which is called division:

2^{H} 2 *x* = 2^{H} 2 · 3

*x* = 3

The latter example abuses the property that 2 is nonzero. We must actually check that the divisor is nonzero. If we don’t check then we get:

4 *x* = 9 *x*

4 *x x*^{H} = 9 *x x*^{H}

4 = 9

Checking for zero is not as simple as it seems. Also expressions with only numbers might contain zero in hidden format, for example (4 + 2 – 6)^{H}. Thus it would seem to be an essential part of mathematics to develop a sound theory for recognising zero, also when it is hidden.

Calculus uses the limit around the difference quotient to prevent division by zero. But the real question might rather be whether we can isolate a factor. When we can isolate that factor without division that requires a limit, then we hopefully have a simpler exposition. Polynomials are a good place to start this enquiry.

The real numbers form a “field”, and when we drop the idea of division, then we get a “ring”. The above 2 *x* = 6 might also be solved in a ring without division. For we can do:

2 *x* – 2 · 3 = 6 – 2 · 3

2 (*x* – 3) = 0

2 = 0 or *x* – 3 = 0

We again use that 2 ≠ 0. Thus *x* = 3.

This example doesn’t show a material difference w.r.t. the assumption of division by 2. We also used that 6 can be factored and that 2 was a common factor. Perhaps this is the more relevant notion. Whatever the case, it doesn’t seem to be so useful to leave the realm of the real numbers.

Our setup has a polynomial *p*[*x*] with focus of attention at *x* = a with point {*a, b*} = {*a, p*[*a*]}. When we regard (*x* – *a*) as a factor, then we get a “quotient” *q*[*x*] and a “remainder” *r*[*x*].

*p*[*x*] = (*x* – *a*) *q*[*x*] + *r*[*x*]

It is a nontrivial issue that *q* and *r *are polynomials again (proof of *polynomial division algorithm, *or proofwiki). These proofs don’t use limits but assume that the divisor is nonzero. Thus we might be making a circular argument when we use that *q* and *r *are polynomials to argue that limits aren’t needed. Examples can be given of polynomial long division. Such examples tend not to mention explicitly that the divisor cannot be zero. Nevertheless, let us proceed with what we have.

Since (*x* – *a*) has degree 1, the remainder must be a constant, and thus be equal to *p*[*a*]. Thus the “core equation” is:

*p*[*x*] = (*x* – *a*) *q*[*x*] + *p*[a] … (* core)

*p*[*x*] – *p*[a] = (*x* – *a*) *q*[*x*]

At *x *= *a *we get 0 = 0 *q*[a], whence we are at a loss about how to isolate *q*[*x*] or *q*[*a*].
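The core equation can also be checked mechanically. Here is a sketch in Python (my own illustration; the helper names are assumptions), dividing synthetically and then multiplying back:

```python
# Check p[x] = (x - a) q[x] + p[a]: divide out (x - a), then expand back.
def synth(coeffs, a):
    """Synthetic division by (x - a): returns (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

def expand(q, a, r):
    """Coefficients of (x - a) * q(x) + r."""
    prod = q + [0]                # multiply q(x) by x
    for i, c in enumerate(q):
        prod[i + 1] -= a * c      # subtract a * q(x)
    prod[-1] += r
    return prod

p = [1, -12, 0, -42]              # p(x) = x^3 - 12 x^2 - 42 (used again below)
q, r = synth(p, 1)
assert expand(q, 1, r) == p       # p reconstructed exactly, no limits involved
print(q, r)                       # [1, -11, -11] -53
```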

When we have defined derivatives via other ways, then we can check that the derivative of (*) is:

*p’ *[*x*] = *q*[*x*] + (*x* – *a*) *q’ *[*x*]

*p’ *[*a*] = *q*[*a*]

We can also rewrite (*) so that it indeed looks like a difference quotient.

*q*[*x*]* = *(*p*[*x*] – *p*[a]) (*x* – *a*)* ^{H}* …. (** slope = tan[θ], see Spiegel’s diagram)

We cannot divide by (*x* – *a*) for *x* = *a*, for this factor would then be zero.

PM. In the world of limits, we could define the derivative of *p* at *a *by taking the Limit[*x* *→ a, q*[*x*]]. This generates again (Spiegel’s diagram):

*q*[*a*] = tan[α]

But our issue is that we want to avoid limits.

The *incline* of the polynomial at point {*a, b*} = {*a, p*[*a*]} is the *line* with the same slope as the polynomial at that point.

*y – p*[*a*] = *s *(*x *– *a*) … (*** incline)

The difference between polynomial and incline might be called the *error. *Thus:

error = *p*[*x*] – *y *= (*p*[*x*] – *p*[*a*]) – (*y – p*[*a*])

= (*x *– *a*) *q*[*x*] – *s *(*x *– *a*)

= (*x *– *a*) (*q*[*x*] – *s*)

When we take *s *= *q*[*a*] then:

error = *p*[*x*] – *y *= (*x *– *a*) (*q*[*x*] – *q*[*a*])

A key question becomes: can we isolate *q*[*x*] by some method ? We already have (**), but this format contains the problematic division. Is there another way to isolate *q* ? There appear to be three ways. Likely these ways are essentially the same but emphasize different aspects.

The dynamic quotient manipulates the domain and relies on algebraic simplification. Instead of *H* we use *D*, with *y x*^{D} = *y* // *x*. Then

*q*[*x*] = (*p*[*x*] – *p*[*a*]) (*x* – *a*)^{D}

means: we first take *x* ≠ *a*,

then take *D* = *H*, so that this is normal division again,

then simplify,

and then declare the result also valid for *x* = *a*.

The idea was presented in ALOE 2007 while COTP 2011 is a proof of concept. COTP shows that it works for polynomials, trigonometry, exponentials and recovered exponents (logarithms). For polynomials it is shown by means of recursion.

Looking at this from the current perspective of the polynomial division algorithm, then we can say that the method also works because division of a polynomial of degree *n > *0 by a polynomial of degree *m *= 1 generates a neat polynomial of degree *n *– *m. *Thus we can isolate *q*[*x*] indeed. Since *q*[*x*] is polynomial, substitution of *x *= *a *provides no problem.

The condition on manipulating the domain nicely plugs the hole in the polynomial division algorithm. It is actually necessary to prevent circularity.
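A sketch of the dynamic quotient for polynomials, under the assumption that “simplify” here means exact synthetic division (the remainder is zero by construction, since *x* = *a* is a root of *p*[*x*] – *p*[*a*]); the names are my own:

```python
# Dynamic quotient for polynomials: isolate q(x) and evaluate at x = a,
# with no limit and no 0/0. Coefficients from the highest power downward.
def dynamic_quotient_slope(p, a):
    """Return q(a), the slope of p at a."""
    # Evaluate p(a) by Horner's scheme.
    p_a = 0
    for c in p:
        p_a = p_a * a + c
    # Form p[x] - p[a] by adjusting the constant term.
    shifted = p[:-1] + [p[-1] - p_a]
    # Simplify (p[x] - p[a]) (x - a)^D: take x != a and divide synthetically.
    q = [shifted[0]]
    for c in shifted[1:]:
        q.append(c + a * q[-1])
    assert q[-1] == 0             # exact: x = a is a root of p[x] - p[a]
    q = q[:-1]
    # Declare the result valid for x = a too, and evaluate q(a).
    slope = 0
    for c in q:
        slope = slope * a + c
    return slope

# p(x) = x^3 - 12 x^2 - 42 at a = 1: slope q(1) = -21, matching p'(1).
print(dynamic_quotient_slope([1, -12, 0, -42], 1))   # -21
```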

Via Descartes (and Suzuki’s article above) we understand that perpendicular to the incline (tangent) there is a line on which lies the center of a circle that touches the incline too. This implies that (*x* – *a*) must be a double factor, i.e. that *x* = *a* is a double root.

We may consider *p*[*x*] / (*x* – *a*)^{2} and determine the remainder *v*[*x*]. The line *y* = *v*[*x*] then is the *incline*, i.e. the equation of the tangent of the polynomial at point {*a*, *p*[*a*]}. It is relatively easy to determine the slope of this line, and then we have *q*[*a*].

Check the Wikipedia example. In *Mathematica* we get PolynomialRemainder[*x*^3 – 12 *x*^2 – 42, (*x* – 1)^2, *x*] = -21 *x* – 32 indeed. At *a* = 1, the slope is *q*[*a*] = -21.
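The same computation can be reproduced without Mathematica. The following sketch (my own) gets the incline from two rounds of synthetic division, using *p* = (*x* – *a*)² *u* + *v* with *v*[*x*] = *q*[*a*] (*x* – *a*) + *p*[*a*]:

```python
# Reproduce PolynomialRemainder[p, (x - a)^2, x] with two rounds of
# synthetic division: p = (x - a)^2 u + v, where v is the incline.
def incline(p, a):
    """Return (slope, intercept) of the incline v(x) = slope*x + intercept."""
    def synth(coeffs):
        out = [coeffs[0]]
        for c in coeffs[1:]:
            out.append(c + a * out[-1])
        return out[:-1], out[-1]
    q, p_a = synth(p)      # first round:  remainder p(a)
    _, q_a = synth(q)      # second round: remainder q(a) = slope
    # v(x) = q(a) (x - a) + p(a)
    return q_a, p_a - a * q_a

# p(x) = x^3 - 12 x^2 - 42 at a = 1 gives v(x) = -21x - 32, as in the text.
print(incline([1, -12, 0, -42], 1))   # (-21, -32)
```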

This method assumes *“algebraic ways”* to separate quotient and remainder. We can find the slope for polynomials without using the limit for the derivative. Potentially the same theory is required for the simplification used in the dynamic quotient.

Remarkably, the method presumes *x **≠ a, *and still derives *q*[*a*]. I cannot avoid the impression that this method still has a conceptual hole.

**Addendum** 2017-01-11: By now we have identified these methods to isolate a factor “algebraically”:

- Look at the form (powers) and coefficients. This is basically Ruffini’s rule, see below. Michael Range works directly with coefficients.
- Dynamic quotient that relies on the algebra of expressions.
- Divide away nonzero factors so that only the problematic factor remains that we need to isolate. (This however is a version of the dynamic quotient, so why not apply it directly ?)

An example of the latter is *p*[*x*] = *x*^3 – 6 *x*^2 + 11 *x* – 6. Trial and error or a graph indicates that zeros are at 1 and 2. Assuming that those points don’t apply, we can isolate *p*[*x*] / ((*x* – 1) (*x* – 2)) = *x* – 3 by means of long division. Subsequently we have identified the separate factors, and the total is *p*[*x*] = (*x* – 1) (*x* – 2) (*x* – 3).
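In code (my own sketch), dividing away the known zeros amounts to two exact synthetic divisions:

```python
# Isolate the remaining factor of p(x) = x^3 - 6x^2 + 11x - 6 by dividing
# away the known zeros at 1 and 2 (both divisions are exact, remainder 0).
def synth(coeffs, a):
    """Synthetic division by (x - a): returns (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

p = [1, -6, 11, -6]
q1, r1 = synth(p, 1)    # divide by (x - 1)
q2, r2 = synth(q1, 2)   # divide by (x - 2)
print(q1, r1)           # [1, -5, 6] 0
print(q2, r2)           # [1, -3] 0  -> the isolated factor is (x - 3)
```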

Check also that “division” is repeated subtraction, whence the method is fairly “algebraic” by itself too.

**Addendum** 2016-12-26: However, check the next weblog entry.

The traditional method is to use the derivative *p*'[*x*] = 3 *x*^2 – 24 *x*, find slope *p*'[1] = -21, and construct the line *y* = -21 (*x* – 1) + *p*[1]. This method remains didactically preferable since it applies to all functions.

If *p*[*x*] = 0 has solution *x * = *a*, then the latter is called a root, and we can factor *p*[*x*] = (*x *– *a*) *q*[*x*] with remainder zero.

For example, *p*[*x*] – *p*[*a*] = 0 has solution *x * = *a. *Thus *p*[*x*] – *p*[*a*] = (*x *– *a*) *q*[*x*] with remainder zero.

Also *q*[*x*] – *q*[*a*] = 0 has solution *x * = *a. *Thus *q*[*x*] – *q*[*a*] = (*x *– *a*) *u*[*x*] with remainder zero.

Thus the error has a double root.

error = *p*[*x*] – *y *= (*x *– *a*)^{2} *u*[*x*]

Unfortunately, this insight only allows us to check a given line *y *= *s x *+* c, *for then we can eliminate *y. *

See above for the summary of Ruffini’s Rule and the links. For the application below you might want to become more familiar with it: check why it works and how it works.

The observation of the double root generates the idea of applying Ruffini’s Rule twice.

I don’t think that it would be so useful to teach this method in highschool. Mathematics undergraduates and teachers better know about its existence, but that is all. The method might be at the core of efficient computer programmes, but human beings better deal with computer algebra at the higher level of interface.

The assumption that *x *≠ *a* goes without saying, but it remains useful to say it, because at some stage we still use *q*[*a*], and we better be able to explain the paradox.

Let us use the example of Ruffini’s Rule at MathWorld to determine the incline (tangent) to their example polynomial 3 *x*^3 – 6 *x* + 2, at *x *= 2. They already did most of the work, and we only include the derivative.

The first round of application gives us *p*[*a*] = *p*[2] = 14, namely the remainder that MathWorld finds.

A second round of application gives the slope, *q*[a] = 30.

    2 |  3    6    6
      |       6   24
      ---------------
         3   12   30

Using the traditional method, the derivative is *p*'[*x*] = 9 *x*^2 – 6, with *p*'[2] = 30.

The incline (tangent) in both cases is *y* = 30 (*x* – 2) + 14 = 30 *x* – 46.
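Both rounds can be checked in a few lines of Python (my own sketch; note the 0 coefficient for the missing *x*² term):

```python
# Two rounds of Ruffini's Rule on the MathWorld example 3x^3 - 6x + 2 at a = 2.
def synth(coeffs, a):
    """Synthetic division by (x - a): returns (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

q, p_a = synth([3, 0, -6, 2], 2)   # first round:  remainder p(2)
_, q_a = synth(q, 2)               # second round: remainder q(2)
print(p_a, q_a)                    # 14 30
print(q_a, p_a - 2 * q_a)          # incline y = 30x - 46
```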

The major conceptual issue is: while *s* is the slope of a line, and we take *s* = *q*[*a*], why would we call *q*[*a*] the slope of the polynomial at *x* = *a* ? Where is the element of “inclination” ? We might have just a formula of a line, without the notion of slope that fits the function. In other words, *q*[*a*] is just a number and not yet a concept.

The key question w.r.t. this issue of the limit – and whether division causes a limit – is not *quite *w.r.t. Ruffini’s Rule but with the definition of slope, first for the line itself, secondly now for the incline of a function. We represent the *incline of a function* with a *line,* but only because it has the property of having a slope and angle with the horizontal axis.

The only reason to speak about an incline is the recognition that above equation (**) generates a slope. We are only interested in *q*[*a*] = tan[α] since this is the special case at the point *x *= *a* itself.

It is only *after this notion of having a slope has been established*, that Ruffini’s Rule comes into play. It focuses on “factoring as synthetic division” since that is how it has been designed. There is nothing in Ruffini’s Rule that clarifies what the calculation is about. It is an algorithm, no more.

Thus, for the argument that *q*[*a*] provides the slope at *x* = *a*, we still need the reasoning that first x ≠ a, then find a general expression *q*[*x*] and only then find *x* = *a*.

And this is what the algebraic approach to the derivative was designed to accomplish.

**Addendum** 2016-12-26: See the next weblog entry for another approach to the notion of the incline (tangency).

Ruffini’s Rule corroborates that the method works, but that it works had already been shown. However, it is likely a mark of mathematics that all these approaches are actually quite related. In that perspective, the algebraic approach to the derivative supplements the application of Ruffini’s Rule to clarify what it does.

Obviously, mathematicians have been working in this manner for ages, but implicitly. It really helps to state explicitly that the domain of a function can be manipulated around (supposed) singularities. The method can be generalised as

*f* '[*x*] = {Δ*f* (Δ*x*)^{D}, then set Δ*x* = 0} = {Δ*f* // Δ*x*, then set Δ*x* = 0}

It also has been shown to work for trigonometry and the exponential function.
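As a minimal worked instance of this recipe (my own illustration, for *f*[*x*] = *x*²):

```latex
\begin{align*}
\Delta f &= (x + \Delta x)^2 - x^2 = 2x\,\Delta x + (\Delta x)^2\\
\Delta f\,(\Delta x)^D &= 2x + \Delta x && \text{(take } \Delta x \neq 0 \text{ and simplify)}\\
f'[x] &= 2x && \text{(then set } \Delta x = 0\text{)}
\end{align*}
```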


- When they say that “you can present the derivative for polynomials without limits” then they mean this only for *didactics* and not for *mathematics*.
- But they are not trained in didactics, so they are arguing this as a hobby, as mathematicians with a peculiar view on didactics. They provide a course for mathematics teachers, but this concerns mathematics and not didactics.
- They only hide the limit, but they do not deny that *fundamentally* you must refer to limits.
- Eventually they still present the limit to maintain exactness, but then it has no other role than to link up to a later course (perhaps only for mathematicians).
- Thus, they make the gap between “didactics” and proper “mathematics” larger *on purpose*.
- This is quite different from the algebraic approach (see here), that *really* avoids limits, and also argues that limits are fundamentally irrelevant (for the functions used in highschool).

I have invited Hulshof since at least 2013 (presentation at the NVvW study day) to look at the algebraic approach to the derivative. He refuses to look into it and write a report on it, though he was so kind as to look at this recent skirmish.

Hulshof perhaps regards his own approach as sufficient. It is quite unclear what he thinks about all this, since he doesn’t discuss the proposal of the algebraic approach to the derivative.

Let me explain what is wrong with their approach with the polynomials.

Please let mathematicians stop infringing upon didactics of mathematics. It is okay to check the quality of mathematics in texts that didacticians produce, but stop this “hobby” of second-guessing.

PM. A recent text is Hulshof & Meester (2015), “*Wiskunde in je vingers*“, VU University Press (EUR 29.95). Potentially they have improved upon the exposition in the pdf, but I am looking at the pdf only. Meester lists this book as “books mathematics” (p14). Hulshof calls it “concepts from mathematics” with “uncommon viewpoints” for “teacher, student” and for “education and curriculum”. When you address students then it is didactics. It is unclear why VU University Press thinks that he and Meester are qualified for this.

A standard notation for a line is *y *= *c *+ *s x*, for constant *c* and slope *s. *

The line gives us the possibility of a definition of *the incline *(Dutch: *richtlijn*). An incline is defined for a function and a point. An incline of a function *f* at a point {*a,* *f*[*a*]} is a line that gives the slope of that function at that point.

It is wrong to say that the incline “has the same slope”. You are not comparing two lines. You are looking at the slope. You only know the slope of the function because of the incline (the line with that slope).

The incline is often called the *tangent. *Students tend to think that *tangents cannot cross the function, *while tangents actually can. Thus *incline *can be a better term.

Hulshof & Meester refer in horror to the *Oxford Advanced Learner’s Dictionary*, which has:

ERROR “Tangent: (geometry) a straight line that touches the outside of a curve but does not cross it. The cart track branches off at a tangent.”

I don’t think that “incline” will quickly replace “tangent”. But it is useful to discuss the issue with students and offer them an alternative word if “tangent” continues to confuse them. It is useful to start a discussion with students by mentioning the (quite universal) intuition of *not*-crossing. An orange touches a table, and doesn’t cross it. But mathematically it would be quite complex to test whether there is any crossing or not. Thus it is simpler to focus on the idea of *incline, **straight course, alignment. *

When you swing a ball and then let go, then the ball will continue in the incline of the last moment. The incline captures that idea, by giving the line with that very slope.

I thank Peter Harremoës for a discussion on this (quite universal) confusion by students (and the OALD) and potential alternative terms. (*Incline *is still a suggestion.) (The word “directive” was rejected as too confusing with “derivative”. But Dutch “richtlijn” is better than raaklijn.)

A polynomial of degree *n* has powers of *x* up to *n*:

*p*[*x*] = *c* + *s x* + *c*_{2} *x*² + … + *c*_{n} *x*^{n}

In this, we take *c* = *c*_{0} and *s* = *c*_{1}. For *n* = 1 we get the line again. We allow that the line has *s* = 0, so that we can have a horizontal line, which would strictly be a polynomial of degree *n* = 0. There is also the vertical line, which cannot be represented by a polynomial.

If *p*[*a*] = 0 then *x* = *a *is called a zero of the polynomial. Then (*x *– *a*) is called a factor, and the polynomial can be written as

*p*[*x*] = (*x *– *a*) *q*[*x*]

where *q*[*x*] is a polynomial of a lower degree.

If *p*[*a*] ≠ 0 then we can still try to factor with (*x *– *a*) but then there will be a remainder, as *p*[*x*] = (*x *– *a*) *q*[*x*] + *r*[*x*]. When we consider *p*[*x*] – *r*[*x*] then *x* = *a *is a zero of this. Thus:

*p*[*x*] – *r*[*x*] = (*x *– *a*) *q*[*x*]

With polynomials we can do long division as with numbers. The following example is the division of *x*³ – 7 *x* – 6 by *x* – 4, which generates a remainder.
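The same division can be done step by step by synthetic division in Python (my own sketch, equivalent to the long division):

```python
# p(x) = x^3 - 7x - 6 divided by (x - 4); note the 0 for the missing x^2 term.
def synth(coeffs, a):
    """Synthetic division by (x - a): returns (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

quotient, remainder = synth([1, 0, -7, -6], 4)
print(quotient, remainder)   # [1, 4, 9] 30, i.e. x^2 + 4x + 9 remainder 30
```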

Regard the polynomial *p*[*x*] at *x * = *a, *so that *b* = *p*[*a*]. We consider point {*a*, *b*}. What incline does the curve have ?

(A) For the incline we have the line in {*a, b*}:

*y *– *b* = *s *(*x *–* a*)

(B) We have *p*[*a*] – *b *= 0 and thus *x *= *a* is a zero of the polynomial *p*[*x*] – *b*. Thus:

*p*[*x*] – *b* = (*x *– *a*) *q*[*x*]

(C) Thus (A) and (B) allow us to assume *y* ≈ *p*[*x*] and to look at the common term *x* – *a*, *“so that”* (quotes because this step is problematic):

*s *= *q*[*a*]

The example by Hulshof & Meester is *p*[*x*] = *x²* – 2 at the point {*a,* *b*} = {1, -1}.

*p*[*x*] – *b* = (*x*² – 2) – (-1) = *x*² – 1

Factor: (*x²* – 1) = (*x* – 1) *q*[*x*]

Or divide: *q*[*x*] = (*x²* – 1) / (*x* – 1) = *x *+ 1

Substituting the value *x *= *a *= 1 in *x *+ 1 gives *s *= *q*[*a*] = *q*[1] = 2.
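The Hulshof & Meester example can be verified in a few lines (my own sketch):

```python
# p(x) = x^2 - 2 at {1, -1}: factor p(x) - b = x^2 - 1 and evaluate the cofactor.
def synth(coeffs, a):
    """Synthetic division by (x - a): returns (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

q, r = synth([1, 0, -1], 1)    # x^2 - 1 divided by (x - 1): exact, remainder 0
slope = q[0] * 1 + q[1]        # q(x) = x + 1 evaluated at x = a = 1
print(q, r, slope)             # [1, 1] 0 2
```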

H&M apparently avoid division by using the process of *factoring. *

Later they mention the limiting process for the division: Limit[*x *→ 1, *q*[*x*]] = Limit[*x *→ 1, (*x²* – 1) / (*x* – 1)] = 2.

As said, the H&M approach is convoluted. They have no background in didactics and they hide the limit (rather than explaining its relevance since they still deem it relevant).

Mathematically, they might argue that they don’t divide but only factor polynomials.

- But, when you are “factoring systematically” then you are actually dividing.
- When you use “realistic mathematics education” then you can approximate division by trial and error of repeated subtraction, but I don’t think that they propose this. See the “partial quotient method” and my comments.
- **Addendum** December 22: there is a way to look only at coefficients, Ruffini’s Rule, on Wikipedia called Horner’s method. A generalisation is known as synthetic division, which expresses that it is no real division, but a method of factoring. (MathWorld has a different entry on “Horner’s method”.) See the next weblog entry.

When dividing systematically, you are using algebra, and you are assuming that a denominator like *x* – 1 isn’t zero but an abstract algebraic term. Well, this is precisely what the algebraic approach to the derivative has been proposing. Thus, their suggestion provides support for the algebraic approach, albeit somewhat crummily and non-systematically, so that it is of little use to refer to this kind of support.

Didactically, their approach is undeveloped. They compare the slopes of the polynomial and the line, but there is no clear discussion why this would be a slope, or why you would make such a comparison. Basically, you can compare polynomials of order *n *with those of order *m, *and this would be a mathematical exercise, but devoid of interpretation. For didactics it does make sense to discuss: (a) the notion of “slope” of a function is given by the incline, (b) we want to find the incline of a polynomial for a particular reason (e.g. instantaneous velocity), (c) we can find it by a procedure called “derivative”. NB. My book *Conquest of the Plane *starts with surface and integral, and only later looks at slopes.

A main criticism however is also that H&M overlooked the fundamental problem with the notion of a slope of a line itself. They rely on some hidden issues here too. I discussed this recently, and repeat this below.

PM. See a discussion of approximating a function by polynomials. Observe that we are not “approximating” a function by its incline now. At {*a*, *b*} the values and slope are *exactly* the same, and there is nothing approximate about this. Only at other points we might say that there is an “error” by looking at the incline rather than the polynomial, but we are not looking at such errors now, and this would be a quite different topic of discussion.

Let us first consider a ray through the origin, with horizontal axis *x* and vertical axis *y. *The ray makes an angle α with the horizontal axis. The ray can be represented by a function as* y = f *[*x*] = *s x, *with the slope *s *= tan[α]. Observe that there is no constant term (*c* = 0).

The quotient *y* / *x *is defined everywhere, with the outcome *s, *except at the point *x *= 0, where we get an expression 0 / 0. This is quite curious. We tend to regard *y */ *x *as the slope (there is no constant term), and at *x *= 0 the line has that slope too, but we seem unable to say so.

There are at least three responses:

(i) Standard mathematics then takes off, with *limits* and *continuity*.

(ii) A quick fix might be to try to define a separate function to find the slope of a ray, but we can wonder whether this is all nice and proper, since we can only state the value *s* at 0 when we have solved the value elsewhere. If we substitute *y* when it isn’t a ray, for example *y* = *x*², then we get a curious construction, and thus the definition isn’t quite complete since there ought to be a test on being a ray.

(iii) The algebraic approach uses the following definition of the *dynamic quotient*:

*y* // *x* ≡ { *y* /* x*, unless *x* is a variable and then: assume *x* ≠ 0, simplify the expression* y* / *x*, declare the result valid also for the domain extension *x* = 0 }

Thus in this case we can use *y* // *x = **s x *// *x *= *s, *and this slope also holds for the value *x *= 0, since this has now been included in the domain too.
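A sketch of this in Python (my own; representing ray and denominator as coefficient lists, with “simplify” implemented as exact polynomial division):

```python
# Dynamic quotient y // x for the ray y = s x: simplify the expression first
# (exact division), then the result is valid at x = 0 too.
from fractions import Fraction

def poly_div_exact(num, den):
    """Exact polynomial division; raises if the remainder is nonzero."""
    num = [Fraction(c) for c in num]
    quotient = []
    while len(num) >= len(den):
        factor = num[0] / Fraction(den[0])
        quotient.append(factor)
        for i, d in enumerate(den):
            num[i] -= factor * d
        num.pop(0)
    assert all(c == 0 for c in num), "not an exact division"
    return quotient

s = 5
ray = [s, 0]                         # y = 5x, coefficients highest power first
print(poly_div_exact(ray, [1, 0]))   # [Fraction(5, 1)]
```

The quotient is the constant polynomial 5, so the slope 5 holds on the whole extended domain, including *x* = 0.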

When we have a line *y *= *c *+ *s x*, then a* hidden part of the definition* is that the slope is *s everywhere, *even though we cannot compute (*y *– *c*) / *x* when *x *= 0. (One might say: “This is what it means to be linear.”)

When we look at *x *= *a *and determine the slope by taking a difference Δ*x, *then we get:

*b *= *c *+ *s a*

*b *+ Δ*y *= *c *+ *s *(*a* *+ *Δ*x*)

Δ*y *= *s * Δ*x*

The slope at *a* would be *s*, but it is also Δ*y* / Δ*x*, which is undefined for Δ*x* = 0.

Thus, the slope of a line is either given as *s* for all points (or, critically for *x *= 0 too) (perhaps with a rule: if you find a slope somewhere then it holds everywhere), or we must use limits.

The latter can be more confusing when *s *has not been given and must be calculated from other resources. In the case of differentials d*y *= *s* d*x, *the notation d*y */ d*x *causes conceptual problems when *s *itself is found by a limit on the difference quotient.

- The H&M claim that polynomials can be used without limits is basically a didactic claim, since they evidently still rely on limits (perhaps to fend off fellow mathematicians). This didactic claim is a wild-goose chase since they are not involved in didactics research.
- If they really would hold that factoring can be done systematically without division, then they might have a point, but then they still must give an adequate explanation how you get from (A) & (B) to (C). Saying that differences are “small” is not enough (not even for polynomials). **Addendum** December 22: see the next weblog entry on Ruffini’s rule.
- They present this in a “reminder course in mathematics” for teachers of mathematics, but it isn’t really mathematics, nor is it useful for teaching mathematics.
- A serious development that avoids limits and relies on algebraic methods, and that covers the same area of polynomials but also trigonometry and exponential functions, is the algebraic approach to the derivative, available since 2007 with a proof of concept in *Conquest of the Plane* in 2011.
- It is absurd that Hulshof & Meester neglect the algebraic approach. But they are mathematicians, and didactics is not their field of research. I think that the algebraic method provides a fundamental redefinition of calculus, but I prefer the realm of didactics above the realm of mathematics with its culture of contempt for empirical science.
- The H&M exposition and neglect is just an example of Holland as Absurdistan, and the need to boycott Holland till the censorship of science by the directorate of the Dutch Central Planning Bureau has been lifted.
