One piece that I do think is worth reflecting on is this one:
Their first example of a “mistake” concerns a May, 2005, Slate column we wrote about the economist Emily Oster’s research on the “missing women” phenomenon in Asia. Her paper, “Hepatitis B and the Case of the Missing Women,” was about to be published in the Aug. 2005 issue of the Journal of Political Economy. At the time, Levitt was the editor of JPE, and Oster’s paper had been favorably peer-reviewed.
Oster argued that women with Hepatitis B tend to give birth to many more boys than girls; therefore, a significant number of the approximately 100 million missing females might have been lost due to this virus rather than the previously argued explanations that included female infanticide and sex-selective mistreatment.
Other scholars, however, countered that Oster’s conclusion was faulty. Indeed, it turned out they were right, and she was wrong. Oster did what an academic (or anyone) should do when presented with a possible error: she investigated, considered the new evidence, and corrected her earlier argument. Her follow-up paper was called “Hepatitis B Does Not Explain Male-Biased Sex Ratios in China.”

I think this misses the point of what was causing concern with the article. An economist wandered into public health and completely overturned the conventional wisdom by considering a possible predictor, without really understanding why epidemiologists had not considered a disease-based explanation before. It is not a small point that the article appeared in an economics journal rather than in a journal where it would be reviewed by experts in the clinical area.
Is it necessarily wrong to have reported potentially exciting new results? No. It is also true that people put in the effort to report when the understanding changed. But this was a well-developed area of public health with very high policy stakes, where people were willing to put in a lot of effort to work out whether there could be an alternate explanation. So it induces some skepticism about "counter-intuitive" claims in areas where there are not the resources to scrutinize such claims deeply.
Now it is natural that research has an error rate. I wish it did not (especially not my research). But this episode points out the hazards of popularizing preliminary results. I think I am especially sensitive to this issue because no field is more guilty of alarming and counter-intuitive findings than pharmacoepidemiology. So I look for clues that make me cautious about publicizing preliminary results before they are really ready for prime time.
Felix Salmon was not happy with Dubner on this one:
To be clear: Gelman and Fung accused Dubner of some slightly intellectually-dishonest practices. And in his self-defense, Dubner engages in some of the most egregious and blatant intellectual dishonesty I’ve ever seen on a blog.
http://blogs.reuters.com/felix-salmon/2012/03/21/annals-of-dishonest-attacks-stephen-dubner-edition/
I would be cautious when hearing or reading a phrase like:
"some of the most egregious and blatant".
Once a superlative is introduced, you usually see logic sneaking out the back door to let the sand-thrower do his thing.