New York City Schools Chancellor Joel Klein often quotes the commission before discussing how U.S. schools have fared since it issued its report. Despite nearly doubling per capita spending on education over the past few decades, American 15-year-olds fared dismally in standardized math tests given in 2000, placing 18th out of 27 member countries in the Organization for Economic Co-operation and Development. Six years later, the U.S. had slipped to 25th out of 30. If we've been fighting against mediocrity in education since 1983, it's been a losing battle.*

The OECD tests are the book of Revelation of the education reform movement, the great ominous portent to be invoked in the presence of critics and non-believers. Putting aside questions of the validity and utility of these tests (perhaps for another post if my stamina holds out), we would certainly like to be in the top ten rather than the bottom.
But before we concede this one, let's pull out our well-thumbed copy of Huff and take one more look. Whenever one side in a complex debate keeps pulling out one particular statistic, you should always take a moment to check for cherry-picking.
Is Fisman distorting the data by being overly selective when picking statistics to bolster his case? Yes, and he's doing it in an egregious way.
Take a look at the Trends in International Mathematics and Science Study (TIMSS). Here's a passage from the executive summary put out by the National Center for Education Statistics:
In 2007, the average mathematics scores of both U.S. fourth-graders (529) and eighth-graders (508) were higher than the TIMSS scale average (500 at both grades). The average U.S. fourth-grade mathematics score was higher than those of students in 23 of the 35 other countries, lower than those in 8 countries (all located in Asia or Europe), and not measurably different from those in the remaining 4 countries. At eighth grade, the average U.S. mathematics score was higher than those of students in 37 of the 47 other countries, lower than those in 5 countries (all of them located in Asia), and not measurably different from those in the other 5 countries.

We could spend some time in the statistical weeds and talk about the methodology of TIMSS vs. the OECD's PISA. TIMSS is the better established and arguably the better credentialed of the two, but both are serious efforts mounted by major international organizations, and it would be difficult to justify leaving either out of the discussion.
If Fisman had limited his focus to the education of high school students and simply ignored the data involving earlier grades, we would have ordinary misdemeanor-level cherry-picking. Not the most ethical of behaviors, but the sort of thing most of us do from time to time.
But Fisman does something far more dishonest; he quietly shifts the subject to teachers in general and often to elementary teachers in particular (take a good look at the study that's at the center of Fisman's article).
This means that, when you strip away the obfuscation, you get the following argument.
1. The best metrics for tracking American education are international rankings on math tests;
2. The best way of improving America's education system is to fire massive numbers of teachers, including those in areas where we are doing well in international math rankings.
The bad news here is that we have a long way to go to make it through Fisman's article and it doesn't get much better; the good news is that we're through with the second paragraph.
* Fought, for the most part, with Klein and Fisman's battle plan, but we've already covered that.