Andrew Gelman was discussing the Lancet article on the safety of hydroxychloroquine. He posited three possible outcomes:
1. The criticisms are mistaken. Actually the research in question adjusted just fine for pre-treatment covariates, and the apparent data anomalies are just misunderstandings. Or maybe there are some minor errors requiring minor corrections.
2. The criticisms are valid and the authors and journal publicly acknowledge their mistakes. I doubt this will happen. Retractions and corrections are rare. Even the most extreme cases are difficult to retract or correct. Consider the most notorious Lancet paper of all, the vaccines paper by Andrew Wakefield, which appeared in 1998, and was finally retracted . . . in 2010. If the worst paper ever took 12 years to be retracted, what can we expect for just run-of-the-mill bad papers?
3. The criticisms are valid, the authors dodge and do not fully grapple with the criticism, and the journal stays clear of the fray, content to rack up the citations and the publicity.

I wrote a comment that went into what I see as a fourth possibility: the data is real and the analysis was done as stated, but the question itself is not fit for observational analysis:
I think there is a fourth possibility. We know that studies of drugs for their primary indication are fraught with risk. This was clearly articulated in the literature by 1983 by Olli Miettinen (https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.4780020222) but was built on a long tradition of problematic cases. More recently we have Tobias Kurth’s paper showing implausible results in cases of medical desperation (https://academic.oup.com/aje/article/163/3/262/59818). Treatment is not random when physicians, patients, and families are desperate.
We also saw this with hormone replacement therapy and CVD: the theory that it would help with CVD ended up channeling the drug to high-SES individuals. This is why drug people insist on trials; too many results reverse direction once channeling effects are removed.
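To make the channeling point concrete, here is a minimal simulation sketch. Everything in it, from the severity scale to the effect sizes, is invented for illustration; the point is only that a drug that truly helps, when channeled to the sickest patients, looks lethal in a crude comparison.

```python
# Channeling bias in miniature: a beneficial drug, given preferentially to the
# sickest patients, looks harmful in a naive comparison. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Latent severity: sicker patients are more likely to die regardless of treatment.
severity = rng.normal(size=n)

# Channeling: desperate physicians give the drug mostly to the sickest patients.
treated = rng.binomial(1, 1 / (1 + np.exp(-2 * severity)))

# True causal effect: the drug HELPS (it lowers the log-odds of death by 0.3).
p_death = 1 / (1 + np.exp(-(-2 + severity - 0.3 * treated)))
died = rng.binomial(1, p_death)

# The crude comparison ignores severity and gets the sign of the effect wrong.
p1, p0 = died[treated == 1].mean(), died[treated == 0].mean()
print(f"crude OR = {(p1 / (1 - p1)) / (p0 / (1 - p0)):.2f}")  # well above 1
```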
There is no possibility that these medications were being given for an indication other than COVID-19. There is just not that much co-infection with malaria.
So option #4 is that the study is fine. The data (despite some alarming patterns) turns out to be okay. But the result is simply wrong because they are studying an intended effect. Or they could be correct: sometimes you do end up accounting for channeling (people do, in fact, occasionally win the lottery) and the results are consistent with the causal association posited by trials. One of my favorite papers looked at tricks for trying to do this with observational data.
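For what it's worth, the standard tricks look roughly like the sketch below: model the channeling itself and reweight. This is a generic illustration of propensity-score weighting, not the method of any particular paper, and it only succeeds here because the simulation hands us the true confounder.

```python
# Inverse-probability-of-treatment weighting (IPTW) on the same kind of
# simulated cohort as above: because we observe the true confounder here,
# reweighting recovers the protective effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
severity = rng.normal(size=n)                                   # true confounder
treated = rng.binomial(1, 1 / (1 + np.exp(-2 * severity)))      # channeling
died = rng.binomial(1, 1 / (1 + np.exp(-(-2 + severity - 0.3 * treated))))

# Propensity model: P(treated | severity). Weight each patient by 1 / P(own arm).
ps = sm.Logit(treated, sm.add_constant(severity)).fit(disp=0).predict()
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# Weighted risks estimate what would happen if everyone were (un)treated.
p1 = np.average(died[treated == 1], weights=w[treated == 1])
p0 = np.average(died[treated == 0], weights=w[treated == 0])
print(f"IPTW OR = {(p1 / (1 - p1)) / (p0 / (1 - p0)):.2f}")  # below 1, as built in
```

The catch, and the reason drug people still insist on trials, is that nothing guarantees the propensity model captures the real channeling; adjust on a mismeasured proxy for severity and the reversal can easily survive.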
The real issue is the strength of the conclusions. Of course, the data issues are a separate and concerning factor as well. But I wish we'd show more humility with observational drug research. I do a LOT of it, and it annoys me when these basics of interpretation are glossed over with bullet text about observational research having limitations (unknown confounders, grrrrr) rather than a recognition that this is why we do trials: because these estimates are inherently unreliable.
#EndofRant

I see this as different from Wakefield, who faked data, or from an analysis structure that is incorrect. People publish these studies all of the time. But they do so with known failure points, and the use of the word "associated" is a thin veneer:
We were unable to confirm a benefit of hydroxychloroquine or chloroquine, when used alone or with a macrolide, on in-hospital outcomes for COVID-19. Each of these drug regimens was associated with decreased in-hospital survival and an increased frequency of ventricular arrhythmias when used for treatment of COVID-19.

Obviously this paper is only of value for a general medical audience if we are supposed to conclude something about the effectiveness of hydroxychloroquine or chloroquine. I mean, this stuff is kind of there in the limitations section, but the conclusion is written without these caveats. The first author, I note, is a cardiologist and not a pharmacoepidemiologist.
Now, this is a long rant, but here is the other side of the coin: arrhythmia is not an intended effect. Yes, we can study unintended drug effects, and this paper raises completely legitimate safety concerns, provided that the issues with the data raised by Gelman and others can be addressed. But that actually useful contribution is being buried under the far more "hot" mortality result.
James Watson even noted:
This caught my eye, as an effect size that big should have been picked up pretty quickly in the interim analyses of randomized trials that are currently happening. For example, the RECOVERY trial has a hydroxychloroquine arm and they have probably enrolled ~1500 patients into that arm (~10,000+ total already). They will have had multiple interim analyses so far and the trial hasn’t been stopped yet.

This is definitely a big point. It is totally plausible that arrhythmias could be missed (they often are in clinical practice), especially with such a sick patient population. But the mortality signal is awfully high for the trials not to have noticed (although the WHO is now checking their interim data, so maybe?).
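A rough power calculation shows why. To be clear, the inputs below are placeholders I am assuming for illustration (a ~25% baseline in-hospital mortality and a ~1.3 relative risk), not numbers taken from RECOVERY or the Lancet paper, and real interim analyses use stricter stopping boundaries than a single test at alpha = 0.05:

```python
# Back-of-envelope power for a 1500-vs-1500 mortality comparison, under assumed
# (not reported) inputs: 25% baseline mortality, relative risk 1.3.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.25 * 1.3, 0.25)  # Cohen's h for 32.5% vs 25%
power = NormalIndPower().power(effect_size=effect, nobs1=1500, alpha=0.05, ratio=1.0)
print(f"power = {power:.2f}")  # comes out well above 0.9 under these assumptions
```

Under these made-up inputs, a mortality signal that size would be hard for repeated interim looks to miss, which is exactly Watson's point.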
Anyway, I guess this is in my wheelhouse, and I just wish we saw more careful interpretation of these studies in major journals.