Instead, I’m a fan of the “few, big, dumb questions” approach. At the end of a program, can students do what they’re supposed to be able to do? How do you know? Where they’re falling short, what are you planning to do about it? Notice that the unit of analysis is the program. For assessment to work, it can’t be another way of doing personnel evaluations. And it can’t rely on faculty self-reporting. The temptation to game the system is too powerful; over time, those who cheat would be rewarded and those who tell the truth would be punished. That’s a recipe for entropy.

I very much agree with this point of view. It is hard enough to measure one thing well; when you try to do both personnel evaluation and program evaluation with a single measure, you run into problems. If you make the tests "high stakes" for one party and "low stakes" for the other, the two parties' interests are misaligned.
Everybody has an interest in how many students from a particular program pass the actuarial exams, because the students and the faculty want the same thing: for people to pass. This alignment has made these exams look good as tools for evaluating both students and programs (similar value can be found in any licensing exam).
But pulling apart "the course is poor" from "the instructor is poor" is very hard, as it is with any pair of correlated variables. It also presumes that the largest effect size is the teacher, which may be true if the teacher is extremely poor. But like a lot of skills that improve with practice, I suspect teaching ability will end up being a second-order effect for experienced teachers.
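The identification problem can be illustrated with a small simulation. This is a hedged sketch, not anything from the original post: the variable names and the correlation strength are my own assumptions, chosen only to show that when "instructor quality" and "course quality" are nearly collinear, a regression can estimate their combined effect well while leaving the individual effects poorly determined.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical setup: instructor quality and course quality are highly
# correlated (strong instructors tend to teach in well-designed courses).
instructor = rng.normal(size=n)
course = 0.95 * instructor + 0.05 * rng.normal(size=n)

# Student outcomes depend on both equally (true effects are 1.0 and 1.0),
# plus noise.
outcome = 1.0 * instructor + 1.0 * course + rng.normal(size=n)

# Ordinary least squares: with near-collinear predictors, the individual
# coefficients have large standard errors even though their sum is
# estimated precisely.
X = np.column_stack([instructor, course])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

r = np.corrcoef(instructor, course)[0, 1]
print(f"correlation between predictors: {r:.3f}")
print(f"estimated instructor effect: {coef[0]:.2f}")
print(f"estimated course effect:     {coef[1]:.2f}")
print(f"sum of estimated effects:    {coef[0] + coef[1]:.2f}")
```

The sum of the two estimated effects lands close to the true total of 2.0, but the split between "instructor" and "course" swings wildly from one random seed to the next, which is exactly the difficulty of attributing a program's outcomes to one factor or the other.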