Monday, November 1, 2010

Placebos

This is relevant to Mark's posts on how comparisons between students who win and lose charter school lotteries are similar to open-label drug trials. Andrew Gelman quotes Kaiser Fung stating that (in drug trials) the standard comparison is:

Effect on treatment group = Effect of the drug + effect of belief in being treated
Effect on placebo group = Effect of belief in being treated
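
To make the subtraction concrete, here is a minimal sketch (in Python, with made-up effect sizes; nothing below comes from any actual trial) of why including a placebo arm isolates the drug effect:

```python
# Illustrative only: the effect sizes are invented.
drug_effect = 5.0      # effect of the drug itself
belief_effect = 2.0    # effect of belief in being treated

treatment_group = drug_effect + belief_effect  # drug + belief
placebo_group = belief_effect                  # belief only

# Subtracting the placebo arm removes the belief effect,
# leaving an estimate of the drug effect alone.
print(treatment_group - placebo_group)  # 5.0
```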


I think that the mathematical formulation of Mark's concern about using charter school lotteries is as follows:

Effect in charter school group = Effect of the charter school intervention + effect of belief in being treated + baseline

Effect on students in standard schools = baseline - (any discouragement effect from losing the lottery)

So even if the effect in the charter school students is greater than that in students who lose the lottery and are placed in standard schools, that still doesn't separate the two sources of variation (placebo effect versus the innate effect of the intervention).
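
A small simulation makes the aliasing explicit. This is only a sketch under the decomposition above - the effect sizes and the hypothetical outcome function are invented for illustration - but it shows that the lottery comparison estimates the sum of the intervention, belief, and discouragement effects rather than the intervention effect alone:

```python
import random

# Invented effect sizes, purely for illustration.
BASELINE = 50.0
INTERVENTION = 3.0     # effect of the charter school intervention itself
BELIEF = 2.0           # effect of belief in being treated
DISCOURAGEMENT = 1.0   # effect of losing the lottery

def outcome(won_lottery):
    noise = random.gauss(0, 5)
    if won_lottery:
        return BASELINE + INTERVENTION + BELIEF + noise
    return BASELINE - DISCOURAGEMENT + noise

random.seed(0)
winners = [outcome(True) for _ in range(100_000)]
losers = [outcome(False) for _ in range(100_000)]

estimate = sum(winners) / len(winners) - sum(losers) / len(losers)
# Converges to INTERVENTION + BELIEF + DISCOURAGEMENT (about 6.0),
# not to INTERVENTION (3.0): the lottery contrast aliases all three.
print(round(estimate, 2))
```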

The real test would be to introduce some sort of placebo intervention (a charter school that used standard educational approaches??). But that is not an easy thing to accomplish, for obvious reasons. I suspect that this is why psychologists end up having to use deception as part of their studies, despite the ethical quagmires this produces.

UPDATE: missing link fixed

7 comments:

  1. The encouragement/discouragement effect also has troubling implications for lottery based analyses.

  2. Agreed. I was just trying to summarize the issue with equations to make it clear that the "intervention effect" is not clean in these studies but is aliased with other effects (which could dominate the results).

  3. On deception, yes, we psychologists tend to lie a lot. That being said, there's a researcher called Frank Miller who argues that a procedure of authorised deception, where participants are informed on consent forms that they will be deceived, is more ethically acceptable.

    Martin and Katz 2010 did a placebo analgesia study where the inclusion of this procedure did not affect the size of the effect. It may be the best way to go about this sort of research in education.

  4. @Disgruntled PhD: Yes, I suspect that you are correct. It'd be hard to set up but a properly controlled experiment would be extremely helpful in the education discussion.

  5. Regarding medical research at least, I've always thought that this whole placebo-effect preoccupation mostly applies to explanatory trials - i.e. those where the purpose is to learn about the actual biological/pharmacological effect of a particular agent.

    However, when it comes to pragmatic trials (where the purpose is to learn about the effect of an intervention as a whole, with all potential mechanisms combined), the placebo effect becomes part of the overall intervention effect, so the appropriate control group should not entail a placebo.

    Thus, the issue boils down to exactly which causal contrast one is interested in: with the placebo effect excluded from the contrast or built in? Is one interested in the effect of a particular agent present vs absent, or in the effect of the whole intervention vs nothing?

    Now, I know nothing about psychology and education studies, so I don't know whether they have the same explanatory/pragmatic duality as medical intervention research, and it's not clear how all this applies to education. Not only is the concept of a biological/pharmacological effect not really there to begin with, but the agent/intervention at issue is intrinsically about cognition/mind. So separating the placebo effect from the "genuine effect of the intervention" (whatever that's supposed to mean) may be not only practically unfeasible but also a theoretically questionable idea in general. What I mean is that it's possible that the placebo effect is, conceptually, inherently part of the intervention (by virtue of the intervention being cognitive by its very definition), so maybe I shouldn't even dream about attempting to separate them.

    Another point: in non-experimental, pharmacoepidemiologic research, the placebo effect appears to have been practically a non-topic - at least it's not dealt with or discussed in explicit terms, I think. Sure, occasionally the placebo effect will be (at least partly) removed from the effect of a medication under study if a reference category other than mere non-use of that medication is used (e.g. some other medication known not to have the effect under investigation). But I'm not sure such a contrast would be used for the purpose of getting rid of the placebo effect per se - rather, I suppose it would be done in an attempt to minimize confounding and differential information bias, perhaps.

  6. reply to Igor:

    "[I]t's possible that the placebo effect is, conceptually, inherently part of the intervention (by virtue of the intervention being cognitive by its very definition) so maybe I shouldn't even dream about attempting to separate them."

    But we are in the process of spending billions of dollars, making radical changes in curriculum and instruction, and trying to fire thousands of people based on the assumption that we CAN separate them - that having the same intervention, only without the placebo effects (and selection effects and peer effects and volunteer effects and Hawthorne...), will have the same results.

  7. "Thus, the issue boils down to which exactly causal contrast one is interested in: with placebo effect excluded from the contrast or built-in? - Is one interested in the effect of a particular agent present vs absent, or in the effect of the whole intervention vs nothing?"

    I think the issue is whether the "placebo effect" would persist if the intervention became standard. If not, and if it were positive and large, then we could always find superiority with an experimental approach versus a passive control.

    In pharmacoepidemiology, for many problems, the placebo effect is thought to be small. In some areas (depression, mild to moderate pain) it may not be and that causes no end of angst.

    If the effect is small, then it can be neglected. The question with education is whether it is negligible or not (and I don't know of good data on this question).

    Or that would be my thinking . . .
