Wednesday, July 6, 2022

When optimal is suboptimal

Whenever a metric maxes out, it creates problems. Back in my teaching days, I used to try in vain to explain to colleagues, and particularly to administrators, that a test where anyone, let alone numerous students, made 100% was a bad test. A "perfect" score meant you didn't actually know how well that student did. Did they just barely make that hundred, or could they have aced a much more difficult test?

Worse yet, if more than one student makes 100%, we have no way of ranking them. When we start calculating final grades and averaging these tests, we invariably give the same amount of credit to two students who had substantially different levels of mastery.
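The point about lost information can be sketched in a few lines of Python. This is purely illustrative (the names and numbers are invented, not from any real class): any mastery above the test's ceiling collapses to the same observed score, so the test can no longer distinguish or rank those students.

```python
# Illustrative sketch of a "ceiling effect": a test can only reveal
# mastery up to its own difficulty cap, so scores are censored at the top.

# Hypothetical latent mastery levels, on a scale where an easy test
# tops out well below the strongest student's ability.
mastery = {"Student A": 148, "Student B": 102, "Student C": 95}

TEST_CAP = 100  # the easy test maxes out at 100%

# Observed scores: anything above the cap is recorded as a "perfect" 100.
observed = {name: min(score, TEST_CAP) for name, score in mastery.items()}

print(observed)
# Students A and B differ by 46 points of latent mastery, but both
# record a 100, so the censored scores can no longer rank them.
```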

This was especially problematic given the push at one big suburban district (not coincidentally my worst teaching experience) to define an A as 93 and above rather than grading on some kind of a curve. Since there was, for the most part, little standardization in the writing of the tests, and arguably even less in the grading of any even slightly open-ended questions, the set cut-off made absolutely no sense. Trying to tweak the difficulty level of the test so that the students doing A-level work fell within that eight-point range was nearly impossible, and pretty much required writing exams where the top of the class was likely to max out the instrument.

This Mitchell and Webb radio sketch looks at the same underlying question from a different angle, and while I would probably argue that the one-year interval is a bit short, I can't entirely dispute the logic. While this specific example might make people a bit uneasy, substitute in zero shoplifting and the reasoning would actually be fairly sound.
