[David] COLEMAN: The new math section will focus on three things: Problem solving and data analysis, algebra and real world math related to science, technology and engineering fields.

The response from most journalists and pundits to this push for applicability has been either indifference or mild approval, but if you dig into the underlying statistics and look into the history of similar educational initiatives, it's hard not to come away with the conclusion that this change pretty much has to be bad (with a better-than-even chance of terrible).
The almost inevitable bad outcome is a hit to orthogonality. As discussed earlier, the value of a variable (such as an SAT score) in a model lies not in how much information it brings on its own but in how much new information it brings given what the other variables in the model have already told us. The models that colleges use to assess students (perhaps with trivial exceptions) include courses taken and grades earned. We want variables added to that model to be as uncorrelated as possible with those transcript variables. The math section of the SAT achieves this by basing its questions on logic, problem solving and basic math classes that everyone should have taken before taking the SAT. Students whose math education stopped at Algebra I should be on roughly equal footing with students who took AP Calculus, as long as they understood and retained what they learned.
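The point about incremental information can be illustrated with a toy simulation. Everything below is my own sketch, not anything from the post: the variable names, coefficients, and sample size are all invented. The idea is that two test scores can have similar standalone predictive power, yet the one that mostly re-measures high school GPA adds almost nothing to a model that already contains GPA, while the one orthogonal to the transcript adds a lot.

```python
# Hypothetical simulation: a predictor's value to a model is the NEW
# information it adds given the other predictors. All names and numbers
# here are illustrative assumptions, not real admissions data.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

hs_gpa = rng.normal(3.0, 0.5, n)    # transcript variable already in the model
ability = rng.normal(0.0, 1.0, n)   # unobserved trait a good test could capture

# Two candidate test scores:
redundant_score = 0.9 * hs_gpa + 0.1 * rng.normal(size=n)  # mostly re-measures GPA
orthogonal_score = ability + 0.5 * rng.normal(size=n)      # mostly new information

# Outcome the colleges care about (again, invented coefficients):
college_gpa = 0.6 * hs_gpa + 0.4 * ability + rng.normal(scale=0.3, size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared([hs_gpa], college_gpa)
with_redundant = r_squared([hs_gpa, redundant_score], college_gpa)
with_orthogonal = r_squared([hs_gpa, orthogonal_score], college_gpa)

print(f"GPA alone:          R^2 = {base:.3f}")
print(f"+ redundant score:  R^2 = {with_redundant:.3f}")   # barely moves
print(f"+ orthogonal score: R^2 = {with_orthogonal:.3f}")  # jumps substantially
```

A test that rewards coursework the transcript already records behaves like the redundant score here; a test of logic and retained fundamentals behaves more like the orthogonal one.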
Rather than making the SAT a more effective instrument, "real world" problems only serve to undercut its orthogonality. Meaningful applied questions will strongly tend to favor students who have taken relevant courses. It might be possible to avoid this trap, but it would be extremely difficult and there's no apparent reason for making the change other than the vague but positive connotations of the phrase. (It's important to note here that Coleman's background is in management consulting and the ability to work positive-sounding phrases into presentations is very much a core skill in that field.)
Even more worrisome is the potential for the really bad question, one bad enough to have the perverse effect of actually causing more problems (in stress and lost time) for the kids who understand the material. Nothing throws a good student off track more than a truly stupid question.
Even if the test-makers know what they're doing, writing good, situation-appropriate problems using real situations and data is extraordinarily difficult. The vast majority of the time, real life has to be simplified to an unrealistic degree to make it suitable for a brief math problem. The end result is usually just an old problem with new nouns: take a rate problem and substitute "computer programmer" for "ditch digger."
You can make a fairly good case for real world questions based on teaching-across-the-curriculum -- for example, using the Richter scale in a homework problem is a good way of working in some earth science -- but since the purpose of the SAT is to measure, not to instruct, that argument doesn't hold here.
The even bigger concern is what can happen when the authors don't know what they're doing.
From Richard Feynman's "Judging Books by their Covers":
Finally I come to a book that says, "Mathematics is used in science in many ways. We will give you an example from astronomy, which is the science of stars." I turn the page, and it says, "Red stars have a temperature of four thousand degrees, yellow stars have a temperature of five thousand degrees . . ." -- so far, so good. It continues: "Green stars have a temperature of seven thousand degrees, blue stars have a temperature of ten thousand degrees, and violet stars have a temperature of . . . (some big number)." There are no green or violet stars, but the figures for the others are roughly correct. It's vaguely right -- but already, trouble! That's the way everything was: Everything was written by somebody who didn't know what the hell he was talking about, so it was a little bit wrong, always!

Anyway, I'm happy with this book, because it's the first example of applying arithmetic to science. I'm a bit unhappy when I read about the stars' temperatures, but I'm not very unhappy because it's more or less right -- it's just an example of error. Then comes the list of problems. It says, "John and his father go out to look at the stars. John sees two blue stars and a red star. His father sees a green star, a violet star, and two yellow stars. What is the total temperature of the stars seen by John and his father?" -- and I would explode in horror.

Keep in mind, Feynman's example was picked to be amusing but representative ("That's the way everything was...a little bit wrong, always"). The post-Sputnik education reformers of his day were making pretty much the same demands that today's reformers are making. There's no reason to expect a better result this time.
Of course, there are good questions that do use real-world data (you can even find some on the SAT), but in order to write them you need a team that understands both the subtleties of the material and the statistical issues involved in testing it.
The more I hear from David Coleman, whether it concerns the College Board or Common Core, the less confidence I have in his ability to head these initiatives.