Posted by: Gary Ernest Davis on: April 15, 2011
It might come as a surprise to some people, but kids are not stupid.
So what’s a little kid to make of the following question:
“Bobby …”
What’s a kid to think?
Did Jennifer count the spots?
How did she know there were 3 fewer spots?
Could we ask Jennifer how many spots her dog has?
Aren’t these sensible questions?
Much more sensible than the original “word problem”.
Kids are not fooled by these questions. They know they are not real mathematical problems. They know these problems are just invented as puzzles to test if they know what “fewer” means.
We lose kids’ trust if we pretend these ridiculous puzzles have anything to do with real mathematical problems.
Here’s another contrived question at a more advanced level:
“B stuffs twice as many envelopes as A in half the time. If they stuff a total of 700 (in same time) how many did B stuff?”
Well, why not ask B? Surely he or she knows how many envelopes they stuffed?
How else did they, or someone else, figure that B stuffs twice as many envelopes as A in half the time?
Is that an historical fact, always true, or is it only true in this instance?
The problem is contrived, and it doesn’t take a particularly smart kid to realize that this is a setup, being used to force them to carry out certain arithmetic operations.
It’s not a real problem, in other words: it’s a phony problem built around a mathematical task.
What’s being tested here?
The ability to convert “twice as many in half the time” to a ratio of 4:1?
The ability to use the 4:1 ratio to realize that of the 700 envelopes, B stuffed 4/5 and A stuffed 1/5?
The ability to calculate 4/5 × 700 = 560?
And if a student gets the answer wrong, they were wrong because …?
And if the question is multiple choice (which it would be because this is a proposed SAT question) did a student get the correct answer because they went through these steps, or by eliminating some answers, or by plugging in one of the proposed answers?
In other words, if this question is testing something, success or failure in answering the question does not tell us whether the student succeeded or failed on what was supposedly being tested.
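For the record, the arithmetic the question is fishing for is brief, assuming the usual reading that “twice as many in half the time” means B’s rate is four times A’s:

```python
# "Twice as many in half the time" => B's rate is 2 * 2 = 4 times A's rate.
# Over the same total time, B therefore stuffs 4 out of every 5 envelopes.
total = 700
b_stuffed = total * 4 // 5  # 560
a_stuffed = total * 1 // 5  # 140
print(b_stuffed, a_stuffed)  # 560 140
```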
This sort of question has nothing – should not have anything – to do with the serious practice of mathematics.
The same structure of problem could be used to solve problems that really do need to be solved. For example:
“Brenda gets paid every 2 weeks, and Allyson gets paid each month. Brenda and Allyson decide to jointly invest in a company’s stock. They agree that Brenda will put in twice as much money each 2 weeks as Allyson puts in each month. When they have together purchased a total of 700 shares the company will offer a dividend per share. Allyson and Brenda want to figure out how much they each will receive in dividends at that time.”
This is now a real problem for Brenda and Allyson.
Brenda and Allyson set up the process of buying shares on a 4:1 ratio – that was given before we knew anything else. When the share dividend comes due the two women want to know how much they will get in dividends. That’s a sensible, real problem.
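Their calculation can be sketched in a few lines. The per-share dividend amount is missing from the problem as quoted above, so the $0.50 used here is a purely hypothetical stand-in:

```python
# Brenda buys on a 4:1 ratio to Allyson (twice the money, twice as often).
total_shares = 700
brenda_shares = total_shares * 4 // 5   # 560
allyson_shares = total_shares * 1 // 5  # 140

dividend_per_share = 0.50  # hypothetical value; the original amount was elided
print(brenda_shares * dividend_per_share)   # 280.0
print(allyson_shares * dividend_per_share)  # 70.0
```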
We run a severe risk of making mathematics appear to be a bunch of tricks to solve problems that are contrived in order to simply test some procedure or other knowledge.
A leader in the use of real, sensible word problems as contrasted with contrived, and often foolish, word problems is Lieven Verschaffel at the University of Leuven, Belgium.
Here is the abstract from an article published in the Journal for Research in Mathematics Education, 1997, Vol. 28, No. 5, pp. 577-601, with Erik De Corte:
“Recent research has convincingly documented elementary school children’s tendency to neglect real-world knowledge and realistic considerations during mathematical modeling of word problems in school arithmetic. The present article describes the design and the results of an exploratory teaching experiment carried out to test the hypothesis that it is feasible to develop in pupils a disposition toward (more) realistic mathematical modeling. This goal is achieved by immersing them in a classroom culture in which word problems are conceived as exercises in mathematical modeling, with a focus on the assumptions and the appropriateness of the model underlying any proposed solution. The learning and transfer effects of an experimental class of 10- and 11-year-old pupils – compared to the results in two control classes – provide support for the hypothesis that it is possible to develop in elementary school pupils a disposition toward (more) realistic mathematical modeling.”
This was published over 13 years ago in the Journal for Research in Mathematics Education, an American publication that is widely recognized as the leading journal for mathematics education research.
Still, over 13 years later, and despite much research and development in realistic word problems, we are, in the United States at least, promoting contrived word problems that in my opinion signal to students we are either lying to them or treating them as stupid.
It is time to stop this nonsense, and focus on realistic word problems.
This does NOT mean such problems have to have their origins in applications outside mathematics itself.
For example, a problem involving number theory or geometry can be a realistic problem within mathematics, as are most of those discussed by James Tanton (@jamestanton) or Alexander Bogomolny (@CutTheKnotMath).
Posted by: Gary Ernest Davis on: April 14, 2011
A valuable feature of quantitative reasoning – also known as numeracy, or quantitative literacy – is its ability to act as a BS detector.
John Allen Paulos (@JohnAllenPaulos)
This morning I read an account in The Guardian of how people are no better at predicting whether a wine is cheap or expensive than they are at predicting a coin toss:
Expensive wine and cheap plonk taste the same to most people
Ian Sample, science correspondent
The Guardian, Thursday 14 April 2011
In a blind taste test, volunteers were unable to distinguish between expensive and cheap wine
An expensive wine may well have a full body, a delicate nose and good legs, but the odds are your brain will never know.
A survey of hundreds of drinkers found that on average people could tell good wine from plonk no more often than if they had simply guessed.
In the blind taste test, 578 people commented on a variety of red and white wines ranging from a £3.49 bottle of Claret to a £29.99 bottle of champagne. The researchers categorised inexpensive wines as costing £5 and less, while expensive bottles were £10 and more.
The study found that people correctly distinguished between cheap and expensive white wines only 53% of the time, and only 47% of the time for red wines. The overall result suggests a 50:50 chance of identifying a wine as expensive or cheap based on taste alone – the same odds as flipping a coin.
Richard Wiseman, a psychologist at Hertfordshire University, conducted the survey at the Edinburgh International Science Festival.
“People just could not tell the difference between cheap and expensive wine,” he said. “When you know the answer, you fool yourself into thinking you would be able to tell the difference, but most people simply can’t.”
All of the drinkers who took part in the survey were attending the science festival, but Wiseman claims the group was unlikely to be any worse at wine tasting than a cross-section of the general public.
“The real surprise is that the more expensive wines were double or three times the price of the cheaper ones. Normally when a product is that much more expensive, you would expect to be able to tell the difference,” Wiseman said.
People scored best when deciding between two bottles of Pinot Grigio, with 59% correctly deciding which was which. The Claret, which cost either £3.49 or £15.99, fooled most people with only 39% correctly identifying which they had tasted.
___________________________________________________________________________
So what does this article, and the study conducted by Professor Wiseman, say about people’s ability to distinguish wines?
The headline proclaims that “Expensive wine and cheap plonk taste the same to most people”, but is that what the study showed?
What we are able to gather is that about half the time people could not tell the difference between cheap and expensive wine.
Does this mean that each individual person might as well have tossed a coin to decide on wine quality?
Not at all.
Imagine that half the people tested were experts and got it right every time, while the other half were dunces who got it wrong every time.
On average, people would be right half the time.
Could this happen any other way?
Of course: half the people, the “experts”, might get it right 75% of the time, while the other half got it wrong 75% of the time.
Again, on average, people would be right half the time.
There are many ways that a heterogeneous group of people could come out getting the wine quality right half the time on average.
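A quick simulation makes the point: a population split between good and bad judges averages out to 50% even though no individual is flipping a coin. (The 578 tasters matches the survey; the 75%/25% split of abilities is illustrative, not from the study.)

```python
import random

# Half the tasters are "experts" (right 75% of the time), half are
# "anti-experts" (right only 25% of the time). Population average: 50%.
random.seed(1)
trials_per_person = 1000
accuracies = [0.75] * 289 + [0.25] * 289  # 578 tasters in total

correct = sum(
    sum(random.random() < p for _ in range(trials_per_person))
    for p in accuracies
)
rate = correct / (len(accuracies) * trials_per_person)
print(round(rate, 2))  # close to 0.5, though no one here is guessing at random
```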
This says nothing about any individual person, despite what the headline seems to suggest.
What the study does suggest is that if we randomly chose a person from the population tested and asked them to judge the quality of wine we would see a correct result about half the time.
That’s quite different to saying that if we choose an individual and test them repeatedly, they will be right about half the time.