Monday, February 14, 2011

Concerns with data driven reform

Dean Dad has a post on Achieving the Dream, a program intended to improve outcomes at community colleges. Two of his commenters offered really interesting insights. Consider mathguy:

Consider the effect of No Child Left Behind. I've seen a noticeable decline in the basic math skills of students at all levels over the last 5 years. Every year I discover a new deficiency that was not seen in previous years (we are talking about Calculus students unable to add fractions). Yet NCLB was assumed to be "working" since the scores were going up. It seems that K-12 was devoting too much time to preparing students for tests, at the cost of killing students' interest in math, trading quality instruction for test-taking skills. Is NCLB a factor in the study? Are socio-economic factors examined in the study?


Or CC Physicist, who stated:

I look at what Asst Prof wrote as an indication that a Dean, chair, and mentor didn't do a good job of getting across the history of assessment. Do you know what "Quality Improvement" program was developed a decade earlier, and what the results were of the outcomes assessment required from that round of reaffirmation of accreditation? Probably not: we have pretty good communication at our CC, but all the negative results from our plan were swept under the rug. The only indication we had that they weren't working was the silent phase-out of parts of that plan. Similarly, the data that drove what we did a decade ago were not updated to see what has changed.


I think these two statements capture, very nicely, the main issue I have with the current round of educational reform. One, if you make meeting a specific metric (as a measure of an underlying goal) a high enough priority, then people will focus on the metric and not the actual goal. After all, if you don’t, your name could be posted in the LA Times along with your underperformance on the stated metric. So we’d better be sure that the metric we are using is very robust in its relation to the underlying goal; in other words, that it is a very good representation of the curriculum we want to see taught and that it measures the skills we want students to acquire.

Two, trust in evidence-based reform requires people to be able to believe the data. This is one area where medical research is leaps and bounds ahead of educational research. A series of small experiments is run (often randomized controlled trials) while the standard of care continues to be used in routine patient care. Only when the intervention shows evidence of effectiveness in the trial environment is it translated into routine care.
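To make that contrast concrete, here is a minimal sketch of the trial model medicine relies on: randomly split subjects into control and treatment arms, measure the outcome, and only act if the effect stands out from the noise. Every number in it (the scores, the effect size, the sample sizes) is made up for illustration; nothing here comes from any actual study.

```python
# Toy randomized controlled trial: hypothetical outcome scores for a
# control arm and a treatment arm, then a Welch's t-statistic to ask
# whether the difference is larger than chance alone would explain.
import math
import random
import statistics

random.seed(1)

# Made-up numbers: control outcomes ~ N(70, 10), treatment ~ N(74, 10)
control = [random.gauss(70, 10) for _ in range(40)]
treatment = [random.gauss(74, 10) for _ in range(40)]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

print(f"t = {welch_t(control, treatment):.2f}")
# Roughly, |t| > 2 suggests the gain is unlikely to be noise; anything
# less and the intervention would not be rolled out to routine care.
```

The point is not the statistics but the discipline: the intervention is tried on a small scale, against the existing standard, before anyone changes policy wholesale.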

In education, such trials are rare indeed. Let us exclude natural experiments for the moment; if we care enough to change the education policy of a country, and to violate employment contracts in the process, then it’s fair to hold ourselves to a high standard of evidence. After all, admissions lotteries (for example) are not true experiments, and it’s hard to be sure that the lottery itself is completely randomized.

The problem is that educational reforms look like “doing something”. But what happens if the reforms are either counterproductive or ineffective? (Even implementing an expensive reform that does nothing has a high opportunity cost.) The people implementing the reforms are often gone in five to ten years, but the teachers (at least now, while they have job security) remain to clean up the wreckage afterwards.

I think that this links well to Mark's point about meta-work: it's hard to evaluate the contributions of meta-work, so it may look like an administrator is doing a lot when actually they are just draining resources away from the core function of teaching.

So when Dean Dad notes, “Apparently, a national study has found that colleges that have signed on to ATD have not seen statistically significant gains in any of the measures used to gauge success,” why can’t we use this evidence to conclude that the current set of educational reform ideas isn’t necessarily working well? Why do we take weak evidence of the decline of American education at face value and ignore strong evidence of repeated failure in the current reform fads?

Or is evidence only useful when it confirms our preconceptions?

2 comments:

  1. These general points are certainly valid. I would not, however, draw too much from the Achieving the Dream example. If you check out this article (http://www.insidehighered.com/news/2011/02/10/five_years_of_achieving_the_dream_in_community_colleges), you'll probably be less than impressed with the actual project.
  2. Ironically, I think a badly designed project is better evidence than a well designed one. Meta-work is often poorly thought out, and the institutional resistance to it by those doing the work may be partially explained by their concerns with the program itself. After all, a new program needs to be "given time to work"; by the time failure is evident there might be a new focus or a new administrator.