Previously on OE, we've talked about how norming, placebo and volunteer effects can cause methods like lottery-based analyses to miss significant sampling effects. Here's one more area of potential concern.

The basic problem here is familiar to clinical researchers. You're trying to determine the optimal level of a drug, therapy, exercise regimen or type of food. There is an ideal dosage, the level of treatment you would prescribe if your only concern was optimizing the effect. Then there is what we might call a realistic dosage, one that takes into account factors like side effects and compliance. These levels vary from group to group, which means the process of finding the right dosage is sensitive to sampling issues. The appropriate level is likely to be different for a group of college students than for a group of senior citizens.

Teachers assigning homework face essentially the same problems. Assuming that the goal is to improve students' scores on a standardized test, there is an ideal homework assignment for each lesson, a subset of the problems available in the text that will tend to maximize the average score of a class (to keep things simple, we'll limit the discussion, with no loss of generality, to homework assignments derived from the textbook).

The ideal assignment assumes that students have unlimited time and will complete all the assigned problems. The realistic assignment would take into account factors like time constraints, demands from other classes, compliance, burnout and parental pushback (I can tell you from experience that parents complain if they feel their children are being given excessive work and I would argue this is a good thing -- teachers should remember that their students' time also has value). The realistic assignment would optimize the class's average score when these constraints are in place.

Like the previously discussed norming, placebo and volunteer effects, an interaction between sampling and treatment level in the homework assignment can interfere with the ability of a lottery-based analysis to detect sampling effects.

To see how this would work, consider a charter school like one in the KIPP system that very publicly acknowledges that students will be expected to do large amounts of homework each night (some schools even require parents to sign contracts agreeing to sign off on the students' homework). Students who apply for this school are aware of these requirements, as are their parents. This certainly raises the possibility that the optimal realistic homework level might be higher for these students, particularly if this is one of those schools that aggressively culls out the students who can't handle the workload.

If this is the case, and if both the charter school and the public schools assign the optimal levels of homework for every class, you will have a sample-based effect that will completely evade detection by a lottery-based analysis. That analysis will compare the performance of those who applied for the charter and were accepted against those who applied but lost the lottery. Since the rejected students will receive a treatment level that was optimized for the general population while the accepted students will receive a treatment level optimized to their particular subgroup, we expect the charter school students to do better, leading the lottery-based analysis to incorrectly conclude that there is no selection effect.
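The mechanism can be sketched with a toy simulation. Everything in it is an assumption chosen purely for illustration: a quadratic dose-response curve (a student's score peaks when the assigned workload matches their personal tolerance) and made-up tolerance distributions for applicants and for the general population. Each school assigns the workload that is optimal for its own student body; the lottery comparison then shows a large "charter advantage" even though winners and losers are statistically identical students, and that advantage reverses when the same workloads are applied to the general population.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(workload, tolerance):
    # Hypothetical dose-response: score peaks when the assigned
    # workload matches the student's personal tolerance.
    return 100 - (workload - tolerance) ** 2

# Illustrative tolerances: applicants self-select for a high-workload model.
applicants = rng.normal(8, 1, 10_000)  # the lottery pool
general = rng.normal(5, 1, 10_000)     # everyone else

w_charter, w_public = 8, 5  # each school optimizes for its own student body

# The lottery splits the applicant pool at random, so winners and
# losers are statistically identical.
win = rng.random(10_000) < 0.5
winners = score(w_charter, applicants[win]).mean()
losers = score(w_public, applicants[~win]).mean()

# Counterfactual: the same two workloads applied to the general population.
gen_charter = score(w_charter, general).mean()
gen_public = score(w_public, general).mean()

print(f"applicants  -- charter workload: {winners:.1f}, public workload: {losers:.1f}")
print(f"general pop -- charter workload: {gen_charter:.1f}, public workload: {gen_public:.1f}")
```

In this setup the lottery winners outscore the lottery losers by a wide margin, yet the "winning" workload would hurt the general population, which does better at the public school's level. The difference is a property of the sample, not the school, exactly as argued above.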

I used homework in this post to keep things simple but the principle applies just as well to most elements of the educational models of many highly praised charter schools -- longer days, Saturday instruction, extended school years, holding students responsible for more material on tests. Put bluntly, these schools get their results by giving students lots of work. Because of these results, these schools are often held up as examples for the rest of the educational system, but when you start with a self-selected sample then counsel out the students in that sample who have trouble keeping up, you should at least consider the possibility that the optimal workload for your student body is higher than the optimal workload for the general populace.

This doesn't mean that all high-performing charter schools are simply running on undetected selection effects, and it certainly doesn't mean that charter schools with high standards should lower them. What it does mean is that selection effects in education show up in subtle and complex ways. It remains extraordinarily difficult to resolve these issues through observational means. Any study that claims to have settled them should be approached with great caution.

Thanks for the post.

You say that,

“Since the rejected students will receive a treatment level that was optimized for the general population while the accepted students will receive a treatment level optimized to their particular subgroup, we expect the charter school students to do better, leading the lottery-based analysis to incorrectly conclude that there is no selection effect.”

But isn’t the difference in homework assignment just one of many aspects of the “treatment” offered by the two alternative schools that comprise the difference under study?

Your point does, of course (and as you’ve argued in other posts), go to the question of whether such a “homework-heavy” model has positive effects that could be generalized to any population beyond the scholarship applicants.

Best,

Jim Manzi

Jim,

"But isn’t the difference in homework assignment just one of many aspects of the “treatment” offered by the two alternative schools that comprise the difference under study?"

Yes and no. The issue here is that certain treatments (or treatment levels) are only viable for certain relatively homogeneous sub-populations. This raises questions not only about generalizability but also about the use of lottery-based analyses in these cases. If we assume that each school assigns the optimal homework for its student body (a huge 'if', but we're being hypothetical here), then the standard analysis would put this down as a difference between schools, not students, even though switching samples would also switch the two schools' performance.

To invoke a frequently-made point, in order for a lottery-based analysis to be really reliable, you need to keep the lottery losers in their own classes and not mix them in with the general population.

Thanks for the comment,

Mark

I think the worry about imperfect randomization is even worse; only a tiny amount of cherry-picking of "star students" could have a massive influence on the difference in means between the two groups.
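This worry is easy to quantify with another made-up simulation (all numbers are illustrative assumptions): draw two arms of a thousand students each from the same standardized-score pool, then quietly swap just 5% of the treatment arm for students from the pool's top decile. That small contamination shifts the difference in means by an amount comparable to, or larger than, its standard error.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1_000                          # students per arm
pool = rng.normal(0, 1, 100_000)   # standardized scores, sd = 1

# Clean randomization: both arms drawn at random from the same pool.
control = rng.choice(pool, n, replace=False)
treatment = rng.choice(pool, n, replace=False)

# Contaminated randomization: swap 50 of the 1,000 treatment slots
# for "star students" from the pool's top decile.
cutoff = np.quantile(pool, 0.9)
contaminated = treatment.copy()
contaminated[:50] = rng.choice(pool[pool > cutoff], 50, replace=False)

se = np.sqrt(2 / n)  # standard error of a difference in means when sd = 1
print(f"clean diff:        {treatment.mean() - control.mean():+.3f}")
print(f"contaminated diff: {contaminated.mean() - control.mean():+.3f}")
print(f"standard error:    {se:.3f}")
```

The top decile of a standard normal averages roughly 1.75, so replacing 5% of one arm shifts its mean by about 0.09 standard deviations, enough to manufacture an apparently significant "effect" out of nothing.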
