I am amazed that Mark was not all over this statistical modeling issue. Matthew Yglesias posts the response from the authors here.
A couple of quick thoughts. First, whatever you think of the authors of the original paper, I am inclined to applaud them for sharing both the source data and the statistical code used to analyze it. Doing so opened them up to a comprehensive methodological critique, but that critique could not have happened had they not shared the data and code. The net result, from a scientific point of view, is a huge win (and I think this outweighs any actual errors, presuming they were not intentional).
Second, the whole issue of association versus causality is one we should think more about. It is easy to slip into a fallacy of equivocation: retreating to associational language when a study is attacked, but interpreting the same result causally in a policy discussion. We have this problem all of the time. We state associations like "people who eat red meat have a higher rate of disease" but then turn them into recommendations like "eat less red meat".
Now clearly one has to make an actual decision at some point and, in a lot of cases, the guidance can do no harm. But there are complex cases where this could matter a lot. Consider the inverse association between smoking and Parkinson's disease. Imagine (based on no data -- just a thought experiment) that we extended this association to show slower progression in patients who continued to smoke. That might lead one to recommend cigarette smoking to Parkinson's disease patients. But if we are wrong (and the association is confounded, or smoking is merely a marker for some other trait), we will increase the rates of lung cancer and serious heart disease for no therapeutic benefit.
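To make the confounding worry concrete, here is a minimal simulation (entirely made-up numbers, not a model of the actual epidemiology): a hypothetical latent trait raises the propensity to smoke and, independently, lowers disease risk. Smoking has no causal effect at all in this toy world, yet smokers show a lower disease rate.

```python
import random

random.seed(0)
n = 100_000

smokers = smokers_sick = 0
nonsmokers = nonsmokers_sick = 0
for _ in range(n):
    trait = random.random()                       # hypothetical latent trait
    smokes = random.random() < trait              # trait raises smoking propensity
    sick = random.random() < 0.30 - 0.20 * trait  # trait independently lowers disease risk
    if smokes:
        smokers += 1
        smokers_sick += sick
    else:
        nonsmokers += 1
        nonsmokers_sick += sick

rate_smokers = smokers_sick / smokers
rate_nonsmokers = nonsmokers_sick / nonsmokers
print(f"disease rate among smokers:    {rate_smokers:.3f}")
print(f"disease rate among nonsmokers: {rate_nonsmokers:.3f}")
```

The observed "protective" association is real in the data but would be disastrous as a treatment recommendation, which is exactly the equivocation problem above.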
Sometimes a randomized trial is an option. But when it is not (many exposures, like height or air quality, are hard or impossible to randomize), the decision-making process becomes very delicate. It is instructive and important to watch how other fields of study deal with these types of issues.
Edit: It seems everyone has been looking at this question. Some additional very good takes (over and above the Mark Thoma, Matt Yglesias and Mike Konczal pieces linked above) include:
I continue to hope Mark Palko can be convinced to weigh in as well.
Re: We state associations like "people who eat red meat have a higher rate of disease" but then turn that into a recommendation of "eat less red meat". You must admit there is a substantial difference between policy recommendations and implementations.
From what I am reading, there was tremendous reluctance on the part of the original authors to release their methodology. Now granted, I work in a hard-sciences lab, where publishing your methodology is de rigueur, so I immediately have a tremendous bias against the "magic formulas" they appear to have employed. I understand the purpose of and need for statistical weighting -- but I also demand that both sets of data, unweighted and weighted, be published to demonstrate that the weighting is not arbitrary or capricious. To that end, I am with Charlie Pierce on this:
[T]here is nothing here to disabuse me of my long-held notion that most economists reach their conclusions by cutting up a sheep on a rock and reading the entrails. The original authors appear to have reached a conclusion and then manipulated the data to support it. Yes, that is a serious charge against scientists, but since these two are economists, I perceive it as just more Derrida-lite blathering by people pretending to be earnest and industrious. IOW -- if you are stupid enough to listen to them in the first place, you are getting what you deserve.
I previously believed we are a nation of idiots because only an idiot looks to a fool for guidance and leadership. I have since learned we are a nation of morons, aspiring to be imbeciles; we seem to believe that idiots are leaders and consider fools to be demi-gods.
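The weighting concern raised in the comment above is worth making concrete. Here is a toy sketch (with entirely made-up numbers, not the actual dataset) showing how the choice between weighting each country equally and weighting each observation equally can flip the headline figure even with no errors in the data:

```python
# Hypothetical growth observations (percent) for two countries in some
# high-debt category -- invented numbers, purely to illustrate that the
# weighting scheme, not the data, can drive the headline result.
growth = {
    "Country A": [-2.0],        # one year in the category
    "Country B": [2.0] * 19,    # nineteen years in the category
}

# Equal weight per country: average within each country, then average those.
country_means = [sum(v) / len(v) for v in growth.values()]
country_weighted = sum(country_means) / len(country_means)

# Equal weight per observation: pool every country-year together.
pooled = [x for v in growth.values() for x in v]
observation_weighted = sum(pooled) / len(pooled)

print(f"country-weighted mean:     {country_weighted:.1f}")      # 0.0
print(f"observation-weighted mean: {observation_weighted:.1f}")  # 1.8
```

Neither scheme is inherently wrong, which is exactly why publishing both the weighted and unweighted figures, as the commenter demands, is the honest move.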