Sunday, September 5, 2010

Statistical significance (a never-ending series)

Andrew Gelman has a post on a mis-definition of the p-value. I want to focus on another aspect of the quote:

Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you're testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke - but still, there is always a certain probability that it was.

Statistical significance testing gives you an idea of what this probability is.


This is not only an incorrect definition of the p-value, but it also appears to ignore the possibility of bias and/or confounding. Even in a randomized drug trial (and drug trials are explicitly used as the example), any non-trivial study can suffer selection bias through non-random loss to follow-up. After all, many drugs are such that participants can guess their exposure status (all analgesics have this unfortunate property), and this can lead to differential study completion rates among some sub-groups. For some outcomes (all-cause mortality), complete ascertainment can be achieved with an intention-to-treat analysis. But that typically induces a uniform bias towards the null.

I am always uncomfortable with how these strong and unverifiable assumptions are glossed over in popular accounts of pharmacoepidemiology.
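The selection-bias mechanism is easy to demonstrate. Here is a minimal simulation (all numbers invented for illustration) in which the drug truly does nothing, yet differential dropout among placebo patients biases a completers-only comparison away from the null:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
treat = rng.integers(0, 2, n)        # perfect randomization
improve = rng.random(n) < 0.6        # the drug does NOTHING: same rate in both arms

# placebo patients who notice they aren't improving drop out more often
p_drop = np.where((treat == 0) & ~improve, 0.5, 0.1)
completer = rng.random(n) > p_drop

rate_treated = improve[completer & (treat == 1)].mean()
rate_control = improve[completer & (treat == 0)].mean()
print(rate_treated, rate_control)    # arms differ among completers despite a true null
```

Among completers, the placebo arm looks markedly better than the drug arm even though outcomes were generated identically, which is exactly the kind of assumption violation no p-value can warn you about.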

Saturday, September 4, 2010

Another troubling study

From the EPI paper:
Because of the range of influences on student learning, many studies have confirmed that estimates of teacher effectiveness are highly unstable. One study examining two consecutive years of data showed, for example, that across five large urban districts, among teachers who were ranked in the bottom 20% of effectiveness in the first year, fewer than a third were in that bottom group the next year, and another third moved all the way up to the top 40%. There was similar movement for teachers who were highly ranked in the first year. Among those who were ranked in the top 20% in the first year, only a third were similarly ranked a year later, while a comparable proportion had moved to the bottom 40%.
What's really amazing here is that the authors of the fire-the-bottom-80-percent paper actually cite other work by Timothy Sass and yet manage to overlook this.
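The instability quoted above is exactly what you would expect when the year-to-year noise in measured value-added is comparable to, or larger than, the stable teacher effect. A toy simulation (all parameters invented) reproduces the pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                           # hypothetical teachers
true_eff = rng.normal(0, 1, n)       # stable "real" effectiveness
noise_sd = np.sqrt(2)                # classroom-to-classroom noise, assumed larger than signal
year1 = true_eff + rng.normal(0, noise_sd, n)
year2 = true_eff + rng.normal(0, noise_sd, n)

bottom1 = year1 < np.quantile(year1, 0.2)
stayed = (year2 < np.quantile(year2, 0.2))[bottom1].mean()
moved_up = (year2 > np.quantile(year2, 0.6))[bottom1].mean()
print(stayed, moved_up)  # well under half stay in the bottom quintile;
                         # a sizable fraction jumps to the top 40%
```

With a signal-to-noise ratio in this range, roughly a third of the bottom quintile stays put and a substantial share lands in the top 40% — the same churn the EPI paper reports.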

EPI Briefing Paper -- Problems with the Use of Student Test Scores to Evaluate Teachers

In terms of education reform, this is probably the biggest story to come over the wires in a long time:
While there are good reasons for concern about the current system of teacher evaluation, there are also good reasons to be concerned about claims that measuring teachers’ effectiveness largely by student test scores will lead to improved student achievement. If new laws or policies specifically require that teachers be fired if their students’ test scores do not rise by a certain amount, then more teachers might well be terminated than is now the case. But there is not strong evidence to indicate either that the departing teachers would actually be the weakest teachers, or that the departing teachers would be replaced by more effective ones. There is also little or no evidence for the claim that teachers will be more motivated to improve student learning if teachers are evaluated or monetarily rewarded for student test score gains.
Read the paper here, then take a look at Kenneth J. Bernstein's detailed analysis and Joseph's brief explanation of why we should listen to Donald Rubin.

Friday, September 3, 2010

The principal effect -- a footnote to the last post

When it comes to education reform, you can't just refer to the elephant in the room. It's pretty much elephants everywhere you look. There is hardly an aspect of the discussion where reformers don't have to ignore some obvious concern or objection.

The elephant of the moment is the effect that principals and other administrators have on the quality of schools. Anyone who has taught K through 12 can attest to the tremendous difference between teaching in a well-run and a badly-run school. In a well-run school, even the most experienced teacher will find it easier to manage classes, cover material, and keep students focused. All of those things help keep test scores up, as does the lower rate of burnout. For new teachers, the difference is even more dramatic.

On top of administrator quality, there is also the question of compatibility. In addition to facing all the normal managerial issues, teachers and principals have to have compatible educational philosophies.

As we've mentioned more than once on this site, educational data is a thicket of confounding and aliasing issues. That thicket is particularly dense when you start looking at teachers and principals and, given the concerns we have about the research measuring the impact of teachers on test scores, I very much doubt we will ever know where the teacher effect stops and the principal effect starts.

Addiction

I think that it is easy to understate how hard it can be to eliminate an addiction, even one whose potential harms we know well (e.g., smoking). I found this passage really interesting:

One time I had just enough money to put in the electric meter or buy a packet of cigarettes. There I was sat in the dark smoking like a chump trying to comfort myself with the fact that it was not crack.


These sorts of tales really make me ponder whether we should focus more on "harm reduction" and whether elimination is a quixotic pursuit.

Thursday, September 2, 2010

Oh, Canada -- another interesting omission in "Clean Out Your Desk"

We're back with our ongoing coverage of Ray Fisman's recent article in Slate which ran with the provocative tagline "Is firing (a lot of) teachers the only way to improve public schools?" (notice that he didn't say "a way" or "the best way").

If you tuned in late, here's what you need to know:

Dr. Fisman starts by discussing a presidential commission report from the early Eighties that said the damage done by our poor educational system was comparable to an act of war. This somewhat apocalyptic language has since become a staple of the reform movement. It grabs the attention, justifies big, expensive, untried steps and sets up a false dichotomy between action and inaction.

The proceedings are then handed over to Joel Klein. Klein builds on the verge-of-disaster theme by invoking the United States' low ranking on the Organization for Economic Co-operation and Development's PISA tests. I've commented at some length on the implications of citing PISA while completely ignoring the better-established and well-respected TIMSS, even when the discussion shifted to elementary schools where the TIMSS scores would seem to be far more relevant. (The term cherry-picking did come up.)

For now, though, let's grant Chancellor Klein and Dr. Fisman the benefit of the doubt. Let's say we accept the premise that OECD's PISA rankings are such a good and reliable measure of the state of a nation's schools that we don't even need to look at other metrics. We'll even stipulate for the sake of argument that a bad PISA ranking is sufficient grounds for radical measures. With all of these conditions in place, take a close look at the next part of Dr. Fisman's article:

What could turn things around? At a recent event that I organized at the Columbia Business School, Klein opened with his harsh assessment of the situation, and researchers offered some stark options for getting American education back on track. We could find drastically better ways of training teachers or improve our hiring practices so we're bringing aboard better teachers in the first place. Barring these improvements, the only option left is firing low-performing teachers—who have traditionally had lifetime tenure—en masse.

The emphasis on better teachers—through training, selection, or dismissal—comes from the very consistent finding that improving faculty is one of the best, most reliable ways to improve schools. If the person standing at the front of the classroom has raised the test scores of students he's taught before, he's likely to do so again.

But how do you get good teachers in the classroom? Unfortunately, it turns out that most evidence points toward great instructors being born, not made. National board certification may help a bit, a master's degree in education not at all. It's also difficult to pick out the best teachers based on a résumé or even a sample lesson. It takes a year or so before evaluators (and even teachers themselves) know who is really good at getting kids to learn, and few qualifications are at all correlated with teaching ability. Candidates with degrees from prestigious colleges—the type where Teach for America does much of its recruiting—do a bit better, but not much.
Here's the gist of Dr. Fisman's premise:

1. According to PISA (the test that trumps all other tests) the state of U.S. education is dire;

2. We need to improve the quality of our teachers "through training, selection, or dismissal";

3. So far, no one has found a way to make training or selection work.

If we want education to do well, we might just have to start firing teachers en masse. And by "do well," we mean outscore other countries, which raises the question: how do other countries find all of those natural teachers?

Of course, comparing educational systems of different countries can be tricky, but we should at least be able to look at Canada. It's a fairly large industrialized country, not that different economically, and very similar culturally. Its comparable K-through-12 educational system has to deal with English as a second language (a huge immigrant population), relies on roughly the same type of teacher training/certification that we use, and continues to pull teachers in with promises of good job security.

In terms of this discussion, the biggest difference between the two countries could well be Canada's somewhat reactionary approach to reform (for example, only one province, Alberta, allows public charter schools). With such limited school choice and no real attempt to clean out the deadwood from behind the podium, the Canadian educational system looks a lot like the American system before the reform movement.

And how is Canada doing on the PISA math test?

From Measuring Up: Canadian Results of the OECD PISA Study:

One way to summarize student performance and to compare the relative standing of countries is by examining their average test scores. However, simply ranking countries based on their average scores can be misleading because there is a margin of error associated with each score. As discussed in Chapter 1, when interpreting average performances, only those differences between countries that are statistically significant should be taken into account. Table 2.1 shows the countries that performed significantly better than or the same as Canada in reading and mathematics. The averages of the students in all of the remaining countries were significantly below those of Canada. Overall, Canadian students performed well. Among the countries that participated in PISA 2006, only Korea, Finland and Hong Kong-China performed better than Canada in reading and mathematics. Additionally Chinese Taipei performed better than Canada in mathematics.
That puts them in the top ten (in science they were in the top three). Now let's review the United States' performance (quoting Dr. Fisman):
Despite nearly doubling per capita spending on education over the past few decades, American 15-year olds fared dismally in standardized math tests given in 2000, placing 18th out of 27 member countries in the Organization for Economic Co-operation and Development. Six years later, the U.S. had slipped to 25th out of 30.
How do we reconcile these facts with Dr. Fisman's argument? As far as I can see, there are only four possibilities (if I've missed any, please click the comment button and let me know):

1. Though PISA is a useful test, international PISA ranking may not be a sufficient measure of a country's school system;

2. Teacher quality is not a major driver of national educational performance;*

3. Teachers are made, not born; i.e., it is possible to train people to be good teachers;

4. Canada just got lucky and beat the odds hundreds of thousands of times.

If this were a PISA question, I hope no one would pick number four.



* This really is a topic for another post, but I would expect the administrator effect to overwhelm the teacher effect. Perhaps Dr. Fisman is going to follow up with a Slate article on firing administrators who produce lackluster test performance.

Small Schools

I often disagree with Alex Tabarrok on education; I think we are both arguing for a better world, but we have somewhat different ideas as to the best approach. His article on small schools, though, is really worth reading. Heck, every epidemiology student should read it as a reminder of the hazards of interpreting ranks in a population without also considering the variance.

Very well done.
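For those who haven't read the article, the core point is a small-sample artifact: school averages based on few students are noisier, so small schools show up disproportionately at both the top and the bottom of any ranking. A quick sketch (enrollment numbers invented) makes the point:

```python
import numpy as np

rng = np.random.default_rng(2)
n_schools = 2_000
# hypothetical enrollments: half small (25 students), half large (400)
sizes = np.where(rng.random(n_schools) < 0.5, 25, 400)
# every school has the SAME true quality; only sampling noise differs
means = np.array([rng.normal(0, 1, s).mean() for s in sizes])

order = np.argsort(means)
frac_top_small = (sizes[order[-50:]] == 25).mean()
frac_bot_small = (sizes[order[:50]] == 25).mean()
print(frac_top_small, frac_bot_small)  # small schools dominate both extremes
```

Rank the schools by mean score and the small schools crowd both tails, even though every school here is identical by construction.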

Wednesday, September 1, 2010

Rubin on Educational Testing

From a Daily Kos post about the use of Value-Added Assessment methodologies:

In 2004, Donald Rubin opined

We do not think that their analyses are estimating causal quantities, except under extreme and unrealistic assumptions.


Now I am not familiar with the actual research, but I am likely to take Donald Rubin seriously. Not only is he one of the founders of causal inference, multiple imputation, and propensity scores, but he has a long history of tackling extremely difficult epidemiological problems. For a humbling experience (for those of us in biomedicine), his CV is here.

I dislike appeals to authority, in general, but claims that researchers skeptical about the value of these testing methods are misinformed seem to be poorly grounded. I don’t want to say Rubin is right about everything but I do think we should take his concerns seriously.

[as a side note, he was also the PhD supervisor of Andrew Gelman, whose blog is worth following]

Econned

It is the start of the school year and time to read something neither epidemiological nor statistical. So, being me, I decided to read Yves Smith's new book Econned. I'll let you know what I think, but the introduction, which I read this morning, suggests the book is off to a strong start. The best quote so far:

Theories that fly in the face of reality often need to excise inconvenient phenomena, and mainstream economics is no exception.


This quote reminds me of Karl Popper's thinking: one often learns more from what does not fit one's theory than from what does (i.e., falsification). This principle is hard to follow in very complex fields (like economics and epidemiology) where you are guaranteed to have at least some mismatches and disconfirming evidence for everything. But it is good to cultivate a sense of humility about our models!

Tuesday, August 31, 2010

Measurement part 2

From Matt Yglesias:

On a recent evaluation, her principal, Oliver Ramirez, checked off all the appropriate boxes, Tan said — then noted that she had been late to pick up her students from recess three times.

“I threw it away because I got upset,” Tan said. “Why don’t you focus on my teaching?! Why don’t you focus on where my students are?”


Matt argues that proponents of teacher effectiveness are misunderstanding their critics:

The idea has gotten out there that proponents of measuring and rewarding high-quality teaching are somehow engaged in “teacher-bashing.” I think that’s one part bad faith on the part of our antagonists, one part misunderstanding on the part of people who don’t follow the issue closely, and at least one part our own fault for focusing too much on the negative.


But I think his own example shows why skepticism persists. It's easy to measure the wrong things, incentivize the wrong behavior, and do a fair amount of damage to a system. What I would really like to see is an argument for incremental change and experimentation rather than radical reform driven by standardized tests. Or, if we must use some sort of standardized test approach, I'd like to see better evidence that these tests are designed to measure teacher effectiveness and do not omit important elements. For example, I think clear and interesting writing is hard to do (as readers of this blog may notice when they try to follow my words) and it is very hard to score objectively. Multiple-choice questions on word definitions are much easier to score but, perhaps, do not measure the most important skills we want to teach.

Certainly something to ponder.

Are you measuring the right thing?

Statistics is a fantastic tool and capable of creating enormous advances. However, it remains the slave of the data that you have. The worst-case scenario is when the "objective metric" is actually measuring the wrong thing. Consider customer service in modern America:

Modern businesses do best at improving their performance when they can use scalable technologies that increase efficiency and drive down cost. But customer service isn’t scalable in the same way; it tends to require lots of time and one-on-one attention. Even when businesses try to improve service, they often fail. They carefully monitor call centers to see how long calls last, how long workers are sitting at their desks, and so on. But none of this has much to do with actually helping customers, so companies end up thinking that their efforts are adding up to a much better job than they really do. In a recent survey of more than three hundred big companies a few years ago, eighty per cent described themselves as delivering “superior” service, but consumers put that figure at just eight per cent.


Here, the core issue seems to be that measuring efficiency at delivering customer service is not the same thing as having good outcomes. Having done statistics for a call center, I can assure you that they are obsessive about everything that can be measured. But satisfaction is a hard thing to measure and it most assuredly matters.

This analogy is why I am concerned with the use of standardized tests for measuring educational achievement. It is possible that they are capturing only part of a complex process and that the result of focusing on them could be fairly poor. After all, companies have tried for a couple of decades now to deliver exceptional customer service via call centers, and the results hardly amount to a consensus that customer service is a delightful experience.

It is not that these processes can't be evaluated. It's just that the success of education or customer service may depend on things that are hard to measure. If we only measure those features that are easy to measure we may end up wondering why education is in decline despite a steady improvement in the key metrics we use to evaluate it.

Joel Klein's Record

From Mark Gimein (via Felix Salmon), here's a well-timed story from New York Magazine:
New York City public-school kids may be dreading the end of summer, but schools chancellor Joel Klein is the one who’ll really be tested when classes begin again. Last spring, Klein was bragging about the extraordinary upswing in scores during his tenure: a 31-point rise in the percentage of students who passed state reading tests, a 41-point increase in math. That was before state authorities admitted that they’d been progressively more lenient in scoring the tests, and decided to grade more strictly.

The new stringency resulted in the elimination of most of the miraculous gains of the Bloomberg years, and an administration that had lived by the numbers is getting clobbered by them. Klein told parents that the state “now holds students to a considerably higher bar.” This would make sense only if the state hadn’t previously been lowering that bar.

Last year, NYU professor and Klein antagonist Diane Ravitch said exactly that in a Times op-ed, an assertion that Klein claimed was “without evidence.” But the fact that New York students’ scores on the National Assessment of Educational Progress had moved only marginally, even as state scores skyrocketed, was manifest then and is inescapable now.
As discussed here, we've seen Klein omitting relevant statistics before.

Monday, August 30, 2010

Sentences to ponder

We always talk about a model being "useful" but the concept is hard to quantify.


-- Andrew Gelman

This really does match my experience. We talk about the idea that "all models are wrong but some models are useful" all of the time in epidemiology. But it's rather tricky to define "useful" in a rigorous way.

Is Ray Fisman one of the best and the brightest?

Based on some of the feedback to my past few posts on Ray Fisman's recent Slate article, there's a point I should probably make explicit: given all available evidence including reliable first-hand accounts, Ray Fisman is an accomplished researcher and a good guy. Furthermore, I am working under the assumption that, like most people in the reform movement, Dr. Fisman is motivated by a deep concern about the state of education and a genuine desire to improve it. (I have also found this a safe assumption when dealing with the vast majority of the teachers Dr. Fisman would fire.)

I have singled out Dr. Fisman not because "Clean Out Your Desk" was exceptionally bad but because it was exceptionally representative. If this were an anomaly written by someone who was stupid or incompetent or had a grudge against teachers, it wouldn't be worth anyone's time to discuss, let alone exhaustively rebut. This is something more disturbing.

David Warsh has drawn a relevant analogy between the reform movement and the run-up to Vietnam:
Remember the recipe for a policy disaster? Start with a handful of policy intellectuals confronting a stubborn problem, in love with a Big Idea. Fold in a bunch of ambitious Ivy League kids who don’t speak the local language. Churn up enthusiasm for the program in the gullible national press – and get ready for a decade of really bad news. Take a look at David Halberstam’s Vietnam classic The Best and the Brightest, if you need to refresh your memory. Or just think back on the run-up to the war in Iraq.
The education reform movement is filled with smart, well-intentioned people like Ray Fisman. Under the circumstances, that doesn't provide much comfort.

Zero Tolerance

A timely post from Matt Yglesias:

The commonplace scenario in the United States when people decide to “get tough” and implement a policy of “zero tolerance” for infractions of the rules is to in practice tolerate the majority of infractions by not catching perpetrators and then hit a minority of violators with extremely harsh sanctions. For years now, Mark Kleiman has been pushing the reverse approach—make sanctions relatively mild, but make them swift and nearly certain.


The results were compelling:

Now the results are in: drunk-driving fatalities fell from twice the national average, 70, in 2006 to just 34 in 2008, the most recent year for which data are available.


It is a key element of public health policy to try to find ways to handle behaviors that involve both a health issue (like addiction to alcohol) and a negative externality (like hitting people with cars). It is really interesting to see research being done on which approaches are actually the most effective. This type of research is important stuff and has some pretty interesting ramifications for improving public health in a wide range of circumstances.