(As I mentioned before, there is reason to believe that this research is biased in favor of charter schools.)
But for all their support and cultural cachet, the majority of the 5,000 or so charter schools nationwide appear to be no better, and in many cases worse, than local public schools when measured by achievement on standardized tests, according to experts citing years of research. Last year one of the most comprehensive studies, by researchers from Stanford University, found that fewer than one-fifth of charter schools nationally offered a better education than comparable local schools, almost half offered an equivalent education and more than a third, 37 percent, were “significantly worse.”
Although “charter schools have become a rallying cry for education reformers,” the report, by the Center for Research on Education Outcomes, warned, “this study reveals in unmistakable terms that, in the aggregate, charter students are not faring as well” as students in traditional schools.
For a variety of reasons, analyses of VAM [Value Added Modeling] results have led researchers to doubt whether the methodology can accurately identify more and less effective teachers. VAM estimates have proven to be unstable across statistical models, years, and classes that teachers teach. One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%. Another found that teachers’ effectiveness ratings in one year could only predict from 4% to 16% of the variation in such ratings in the following year. Thus, a teacher who appears to be very ineffective in one year might have a dramatically different result the following year. The same dramatic fluctuations were found for teachers ranked at the bottom in the first year of analysis. This runs counter to most people’s notions that the true quality of a teacher is likely to change very little over time and raises questions about whether what is measured is largely a “teacher effect” or the effect of a wide variety of other factors.
A study designed to test this question used VAM methods to assign effects to teachers after controlling for other factors, but applied the model backwards to see if credible results were obtained. Surprisingly, it found that students’ fifth grade teachers were good predictors of their fourth grade test scores. Inasmuch as a student’s later fifth grade teacher cannot possibly have influenced that student’s fourth grade performance, this curious result can only mean that VAM results are based on factors other than teachers’ actual effectiveness.
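The instability findings above are just what one would expect if a large share of a single year's VAM score is noise rather than a stable teacher effect. A toy simulation makes the point (all numbers here are illustrative assumptions, not taken from the cited studies): even when every teacher's true quality is perfectly constant across years, noisy annual measurements produce exactly this kind of rank churn.

```python
import random

random.seed(0)

N = 1000            # hypothetical teachers
TRUE_SD = 1.0       # spread of stable "true" teacher quality (assumed)
NOISE_SD = 2.0      # assumed yearly noise: class composition, test error, etc.

# True quality never changes; only the yearly measurement varies.
true_quality = [random.gauss(0, TRUE_SD) for _ in range(N)]
year1 = [q + random.gauss(0, NOISE_SD) for q in true_quality]
year2 = [q + random.gauss(0, NOISE_SD) for q in true_quality]

def percentile_rank(scores):
    # rank of each teacher as a fraction (0 = lowest, 1 = highest)
    order = sorted(range(N), key=lambda i: scores[i])
    ranks = [0.0] * N
    for r, i in enumerate(order):
        ranks[i] = r / (N - 1)
    return ranks

r1, r2 = percentile_rank(year1), percentile_rank(year2)
top1 = [i for i in range(N) if r1[i] >= 0.8]          # top 20% in year 1
stay_top = sum(1 for i in top1 if r2[i] >= 0.8) / len(top1)
fall_bottom = sum(1 for i in top1 if r2[i] < 0.4) / len(top1)

print(f"of year-1 top 20%: {stay_top:.0%} stay in the top 20%, "
      f"{fall_bottom:.0%} fall to the bottom 40% in year 2")
```

With a noise-to-signal ratio in this (assumed) range, a substantial fraction of the year-one "top" teachers land in the bottom 40% a year later even though nobody's true quality changed at all, which is the pattern the studies describe.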
Firing under-performing teachers
If new laws or policies specifically require that teachers be fired if their students’ test scores do not rise by a certain amount, then more teachers might well be terminated than is now the case. But there is not strong evidence to indicate either that the departing teachers would actually be the weakest teachers, or that the departing teachers would be replaced by more effective ones.
The study was conducted by the National Center on Performance Incentives at Vanderbilt. The center, which takes no advocacy position on the issue, was created at the university's highly regarded Peabody College of Education and Human Development in 2006 with a $10 million federal research grant.
In a three-year experiment funded by the federal grant and aided by the Rand Corp., researchers tracked what happened in Nashville schools when math teachers in grades 5 through 8 were offered bonuses of $5,000, $10,000 and $15,000 for hitting annual test-score targets. About 300 teachers volunteered. Researchers randomly assigned half of the participants to a control group ineligible for the bonuses and the other half to an experimental group that could receive bonuses if their students reached certain benchmarks.
Researchers designed the bonuses to be large enough to function as a legitimate incentive for teachers whose average salary, according to a union official, is between $40,000 and $50,000. There were no additional variables in the experiment: no professional development, mentoring or other elements meant to affect test scores. The bonuses, totaling nearly $1.3 million, were funded by businessman Orrin Ingram, according to news reports. A university spokeswoman said Tuesday evening that she could not confirm those reports, and Ingram could not be reached for comment.
On the whole, researchers found no significant difference between the test results from classes led by teachers eligible for bonuses and those led by teachers who were ineligible. Bonuses appeared to have some positive effect in the fifth grade, researchers said, but they discounted that finding in part because the difference faded out when students moved to the sixth grade.
Just for the record, I believe that charter schools, increased use of metrics, merit pay and a streamlined process for dismissing bad teachers do have a place in education, but all of these things can do more harm than good if badly implemented and, given the current state of the reform movement, badly implemented is pretty much the upper bound.
In my view, this goes to the question of "crisis". It's fine to have incremental (evidence-based) reform and to have debates on values (do we prefer charter schools from a philosophical perspective?).
But it makes the vilification of the opponents of reform somewhat less tenable.
I don't know much about all the models used in VAM, but if one fits a hierarchical model to multiple years, the teacher effects will get shrunk toward the grand mean based on within- and between-teacher variation. The estimates for each teacher would thus be much more stable (a bias-variance trade-off). I'd be interested to see more about this here.
@Dean, I think you are right that multiple years will improve the VAM estimates. The issue I worry about is whether the bias induced by "gaming tests" can be of the same order as the main effect of teacher performance. I also worry about how well they measure things like writing skills (as compared with maths, where I am a lot more optimistic).
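To make Dean's point concrete, here is a minimal empirical-Bayes sketch of the shrinkage he describes. Everything here is illustrative: the data are simulated, the parameter values are assumptions, and the variance components are estimated by a simple method of moments rather than a full hierarchical fit.

```python
import random
from statistics import mean, variance

random.seed(1)

N_TEACHERS, N_YEARS = 200, 3
TAU, SIGMA = 1.0, 2.0   # assumed between-teacher and within-teacher SDs

# Simulated stable teacher effects, observed with yearly noise.
true = [random.gauss(0, TAU) for _ in range(N_TEACHERS)]
scores = [[t + random.gauss(0, SIGMA) for _ in range(N_YEARS)] for t in true]

teacher_means = [mean(s) for s in scores]
grand = mean(teacher_means)

# Method-of-moments variance components:
within = mean(variance(s) for s in scores)                       # est. sigma^2
between = max(variance(teacher_means) - within / N_YEARS, 0.0)   # est. tau^2

# Shrinkage factor in [0, 1]: pull each raw mean toward the grand mean,
# more aggressively when the within-teacher noise dominates.
shrink = between / (between + within / N_YEARS)
shrunk = [grand + shrink * (m - grand) for m in teacher_means]

mse_raw = mean((m - t) ** 2 for m, t in zip(teacher_means, true))
mse_shrunk = mean((e - t) ** 2 for e, t in zip(shrunk, true))
print(f"shrinkage factor {shrink:.2f}; "
      f"MSE of raw means {mse_raw:.2f} vs shrunk estimates {mse_shrunk:.2f}")
```

The shrunk estimates are less spread out and, on average, closer to the true teacher effects than the raw yearly means, which is exactly why multi-year hierarchical estimates are more stable. (None of this addresses the gaming or construct-validity worries above, which no amount of shrinkage fixes.)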
The really important finding of the Stanford charter school study was that “students do better in charter schools over time. First year charter students on average experience a decline in learning, which may reflect a combination of mobility effects and the experience of a charter school in its early years. Second and third years in charter schools see a significant reversal to positive gains.”
In other words, the reason for the overall average findings is that first-year charter students are bringing down the average. Well, this is no surprise: students who transfer between schools always tend to do worse, just because they are getting adjusted to a different school and possibly an entirely different curriculum and set of expectations. But if students do better the longer they remain in charter schools, then that argues for encouraging more students to remain in charters for longer times.
The policy implications are exactly opposite to what some people seem to think.
I'm not sure why those who are generally pro-school choice would choose to bury the lede, but isn't that consistent with the previously discussed selection and peer effects?
It's consistent with any number of stories, including increased quality of teaching, better curriculum, finding a better fit for each individual student (some do better in a smaller school, for example), and the factors that you mention.
There are at least three types of people who choose charter schools:
1. Students who are motivated to seek academic success, but who aren't satisfied with the low quality of their traditional public school, and who seek out perceived better quality elsewhere.
2. Students who just want something different and more to their tastes -- an arts-based or science-based curriculum, or a smaller school, or joining a good friend, or any number of other things.
3. Students who just aren't doing very well or who are falling into the wrong crowd, and whose parents think that maybe their child will somehow improve somewhere else.
If the question is whether charter school performance is better than traditional public school performance, students from category 1 would be a charter school advantage, category 2 would probably be neutral, and category 3 would be a charter school DISadvantage.
I don't know why some people (such as Diane Ravitch) act as if all or most charter school students are in category 1. There's zero evidence for that. To the contrary, a RAND study last year found that in most locations nationwide, charter school students are entering with the same or lower test scores than their peers. This suggests to me that those students on average probably aren't coming from highly motivated successful families -- or if they are, the supposed benefits of motivation aren't that powerful after all (not powerful enough to make their test scores higher than their public school peers).