Tuesday, March 3, 2015

Epidemiology Research

I am a big fan of Aaron Carroll, who often blogs at The Incidental Economist.  However, in his latest New York Times column he says:
Most of the evidence for that recommendation has come from epidemiologic studies, which can be flawed.

Use of these types of studies happens far more often than we would like, leading to dietary guidelines that may not be based on the best available evidence. But last week, the government started to address that problem, proposing new guidelines that in some cases are more in line with evidence from randomized controlled trials, a more rigorous form of scientific research.
So when did randomized controlled trials stop being a part of epidemiology?  That comes as news to me, as someone who has done this type of work as an epidemiologist.  In particular, trials have their own threats to validity, and a lot of smart causal inference research has examined those too.  Trials raise concerns about cross-over, attrition, and even flawed design.  These elements are all part of a typical epidemiological education and are an important part of public health practice.  Even meta-analyses, in which trials (and now sometimes observational studies) are pooled, are standard parts of epidemiology. 

It seems he is conflating epidemiology with observational research. 

There is also a difference of estimands.  Trials can only assess dietary interventions and how they perform.  The real (true) intake of the participants is always approximated, except perhaps in a metabolic ward.  Even doubly labeled water studies need to make assumptions. 

The real bugbear of nutritional research in humans is measurement error.  It is present in all studies, even trials, which are much less susceptible to bias than cohort or case-control studies.  That is a lot of what we struggle with in this research area. 
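To make the measurement-error point concrete, here is a minimal simulation of the classic consequence, regression dilution: when an exposure (say, true daily intake) is measured with random error, the estimated association with an outcome is attenuated toward the null.  The numbers and noise levels are made up purely for illustration, not drawn from any real nutritional data.

```python
import random

random.seed(42)
n = 10_000
true_slope = 1.0  # hypothetical true exposure-outcome association

# True exposure (e.g., actual intake) and an outcome that depends on it
x = [random.gauss(0, 1) for _ in range(n)]
y = [true_slope * xi + random.gauss(0, 1) for xi in x]

# Observed exposure = truth + classical (random) measurement error
# Error variance equals exposure variance here, so the expected
# attenuation factor is var(x) / (var(x) + var(err)) = 0.5.
x_obs = [xi + random.gauss(0, 1) for xi in x]

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

slope_true = ols_slope(x, y)      # close to the true slope of 1.0
slope_obs = ols_slope(x_obs, y)   # attenuated toward roughly 0.5
```

Since measurement error in self-reported diet is present in trials and observational studies alike, this attenuation is a shared problem, not one unique to either design.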

Now, it is true (and I agree with Aaron Carroll completely) that the trials tell us a lot of what we want to know.  Ultimately we want to know how dietary interventions, as they will actually play out in practice, will change outcomes.  So I share his concern that the trials seem to have been overlooked by the people writing guidelines.  In other words, I think his main conclusion is quite sensible.

But let's not forget that observational research is critical to understanding the patterns of diet that generate the hypotheses and interventions that can actually be tested in trials.  It also gives a lot of insight into how people consume food in the state of nature.  I am never going to stop focusing on high-quality data and the most rigorous possible study designs.  But I think it would be wiser to represent the ecosystem more completely. 

On the other hand, I am not an expert in health care communication, and it may be that such broad strokes are necessary when addressing the general public.  After all, improving public health is everyone's goal, and I am happy to take a few "hits" if that is the ultimate outcome.  But it is also worth thinking about how to make this type of research, and its nuances, better understood in general.

I have a lot of thinking to do. 
