Sunday, February 24, 2013

Value added testing

This article is worth reading in full.  However, the most relevant passage is:

Because student performance on the state ELA and math tests is used to calculate scores on the Teacher Data Reports, the tests are high-stakes for teachers; and because New York City uses a similar statistical strategy to rank schools, they are high-stakes for schools as well. But the tests are not high-stakes for the eighth-graders at Anderson. By the time they take the eighth-grade tests in the spring of the year, they already know which high school they will be attending, and their scores on the test have no consequences.
Seriously, can any student of incentives fail to see how this could go terribly, terribly wrong?  The students could, for example, decide to blow the test because the teacher was overly rigorous. 

This is why I am less skeptical about metrics like the SAT.  It is still flawed, as it is tricky to derive unbiased estimates from observational research.  But in this case, the students, the teacher, and the school all have something at stake in their performance on the exam. 
