When I told my father that I was sending my work saying car seats are not that effective to medical journals, he laughed and said they would never publish it because of the result, no matter how well done the analysis was. (As is so often the case, he was right, and I eventually published it in an economics journal.)

Now compare his article to this one (published a year later):
OBJECTIVE: The objective of this study was to provide an updated estimate of the effectiveness of belt-positioning booster (BPB) seats compared with seat belts alone in reducing the risk for injury for children aged 4 to 8 years. METHODS: Data were collected from a longitudinal study of children who were involved in crashes in 16 states and the District of Columbia from December 1, 1998, to November 30, 2007, with data collected via insurance claims records and a validated telephone survey. The study sample included children who were aged 4 to 8 years, seated in the rear rows of the vehicle, and restrained by either a seat belt or a BPB seat. Multivariable logistic regression was used to determine the odds of injury for those in BPB seats versus those in seat belts. Effects of crash direction and booster seat type were also explored. RESULTS: Complete interview data were obtained on 7151 children in 6591 crashes representing an estimated 120646 children in 116503 crashes in the study population. The adjusted relative risk for injury to children in BPB seats compared with those in seat belts was 0.55. CONCLUSIONS: This study reconfirms previous reports that BPB seats reduce the risk for injury in children aged 4 through 8 years. On the basis of these analyses, parents, pediatricians, and health educators should continue to recommend as best practice the use of BPB seats once a child outgrows a harness-based child restraint until he or she is at least 8 years of age.

So what is different? Well, the complete interview data is a hint as to what could be happening differently. It is very hard to publish a paper in a medical journal using weaker data than what is available elsewhere. Even more interestingly, papers published before this one had already found protective associations (this was 2006), which should also be concerning.
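To make the abstract's method concrete: a multivariable logistic regression estimates an adjusted odds ratio, which generalizes the simple unadjusted odds ratio from a 2x2 table by controlling for covariates (crash direction, seating position, and so on). Here is a minimal sketch of the unadjusted version; the counts are entirely made up, chosen only so the result lands near the paper's reported 0.55, and are not the study's data.

```python
# Illustrative sketch: unadjusted odds ratio of injury from a 2x2 table.
# The actual study used multivariable logistic regression, which produces
# an *adjusted* version of this quantity after controlling for covariates.

def odds_ratio(bpb_injured, bpb_uninjured, belt_injured, belt_uninjured):
    """Odds of injury in the booster-seat (BPB) group divided by the
    odds of injury in the seat-belt-only group."""
    odds_bpb = bpb_injured / bpb_uninjured
    odds_belt = belt_injured / belt_uninjured
    return odds_bpb / odds_belt

# Hypothetical counts (NOT from the paper), picked to give roughly 0.55:
print(round(odds_ratio(11, 1000, 20, 1000), 2))  # → 0.55
```

An odds ratio below 1 indicates a protective association; the regression version simply fits log-odds as a linear function of restraint type plus the other covariates and exponentiates the restraint-type coefficient.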
Then we notice that the Doyle and Levitt paper cites Elliott et al. as a reference, but its authors still claim that they are the first to consider this issue:
This study provides the first analysis of the relative effectiveness of seat belts and child safety seats in preventing injury based on representative samples of police-reported crash data.

So now let us consider reasons that a medical journal may have had issues with this paper. First, it does not seem to engage well with the previous literature. Second, it does not explain why crash-testing results do not seem to translate into an actual reduction in injuries. That gap might be due to misuse of the equipment, but it is not clear to me what the conclusion should be in that case.
But jumping to the conclusion that the paper would not be published because of its result assumes facts not in evidence. It is common for people to jump fields, applying the tools they learned in their own discipline (economics) without necessarily thinking about the issues that obsess people in the new field (public health). Sometimes this can be a good thing, and a new perspective can be a breath of fresh air. But in a mature field there may also be a good reason that the current researchers focus on the points that they do.
This reminds me of Emily Oster, another economist who wandered into public health and seemed surprised at the resistance that she encountered.
So is the explanation Levitt's father gave possible? Yes. But far more likely was the difficulty of jumping into a field with a highly counter-intuitive claim and hoping for an immediate high-impact publication. Medical journals are used to seeing experiments (randomized controlled drug trials, for example) overturn otherwise compelling observational data. So it is not a mystery why the paper had trouble with reviewers, and it does not require any conspiracy theories about public health researchers being closed to new ideas or to data.