Thursday, February 24, 2011

Evidence

John D Cook has a very nice post up about evidence in science:

Though it is not proof, absence of evidence is unusually strong evidence due to a subtle statistical result. Compare the following two scenarios.

Scenario 1: You’ve sequenced the DNA of a large number of prostate tumors and found that not one had a particular genetic mutation. How confident can you be that prostate tumors never have this mutation?

Scenario 2: You’ve found that 40% of prostate tumors in your sample have a particular mutation. How confident can you be that 40% of all prostate tumors have this mutation?

It turns out you can have more confidence in the first scenario than the second. If you’ve tested N subjects and not found the mutation, the length of your confidence interval around zero is proportional to 1/N. But if you’ve tested N subjects and found the mutation in 40% of subjects, the length of your confidence interval around 0.40 is proportional to 1/√N. So, for example, if N = 10,000 then the former interval has length on the order of 1/10,000 while the latter interval has length on the order of 1/100. This is known as the rule of three. You can find both a frequentist and a Bayesian justification of the rule here.

Absence of evidence is unusually strong evidence that something is at least rare, though it’s not proof. Sometimes you catch a coelacanth.
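
To make Cook's arithmetic concrete, here is a minimal sketch (my own illustration, not from his post) comparing the rule-of-three upper bound with the usual Wald interval:

```python
import math

N = 10_000

# Zero hits in N samples: the chance of this under true frequency p is
# (1 - p)**N.  Setting (1 - p)**N = 0.05 and solving for p gives the 95%
# upper bound, which for small p is about -ln(0.05)/N ~= 3/N: the rule of three.
upper_bound = -math.log(0.05) / N
print(f"0/{N} observed -> 95% upper bound ~ {upper_bound:.6f} (rule of three: {3/N})")

# 40% hits in N samples: the Wald 95% interval has half-width
# 1.96 * sqrt(p*(1-p)/N), which shrinks only like 1/sqrt(N).
p_hat = 0.40
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)
print(f"40% observed -> 95% CI half-width ~ {half_width:.4f} (order 1/sqrt(N))")
```

At N = 10,000 the zero-count bound comes out near 0.0003 while the Wald half-width is near 0.0096, matching the 1/10,000 versus 1/100 orders of magnitude in the quote.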


Now it is true that this approach can be carried too far. The comments section has a really good discussion of the limitations of this type of reasoning (it doesn't handle sudden change points well, for example).

But it is worth noting that a failure to find evidence (despite one's best attempts) does tell you something about the distribution. So, for example, the failure to find a strong effect of Vitamin C on mortality, despite a number of large randomized trials, makes the idea that the supplement is actually helpful somewhat less likely. It is true that we could look in just one more population and find an important effect, or that Vitamin C is only useful in certain physiological states (like the onset of a cold) that are hard to capture in a population-based study.

But failing to find evidence of the association isn't bad evidence, in and of itself, that the association is unlikely.

P.S. For those who can't read the journal article, the association between Vitamin C and mortality is Relative Risk 0.97 (95% Confidence Interval: 0.88-1.06), N = 70,456 participants (this includes all of the trials).
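
As a rough illustration of how much even this null result constrains things, here is a back-of-the-envelope sketch (my own calculation, assuming the published interval was computed on the log scale, as is conventional for relative risks). It recovers the standard error of log(RR) from the CI and asks how plausible various effect sizes are given the data:

```python
import math

rr, lo, hi = 0.97, 0.88, 1.06  # pooled RR and 95% CI from the meta-analysis

# Assuming the CI is symmetric on the log scale (the usual construction),
# its half-width in log units recovers the standard error of log(RR).
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for hypothesized in (1.00, 0.90, 0.80):  # no effect, 10% and 20% mortality reduction
    z = (math.log(rr) - math.log(hypothesized)) / se
    p = 2 * (1 - norm_cdf(abs(z)))  # two-sided p-value against that hypothesis
    print(f"RR = {hypothesized:.2f}: z = {z:+.2f}, p = {p:.3g}")
```

The data sit comfortably with no effect (p around 0.5), remain compatible with a modest benefit, but make a large benefit like a 20% mortality reduction very hard to sustain (z around 4). That is the sense in which this failure to find evidence is itself informative.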

3 comments:

  1. I'm not sure why results such as RR=0.97 (95% CI: 0.88-1.06) would be described as "no evidence".

    To the contrary, to me this is (presence of) evidence, and, indeed, quite a good deal of it - i.e. it's strong evidence (statistically), which is hardly surprising given the sample size.
    It's just that this particular evidence happens to detract from the hypothesis of an effect of vitamin C rather than support it.

    Only if the confidence interval were considerably wider could these results be described as being tantamount to "no evidence" - although even then, strictly speaking, it still wouldn't be a complete absence of evidence, so long as the confidence limits don't range from minus infinity to plus infinity.

  2. "You’ve sequenced the DNA of a large number prostate tumors and found that not one had a particular genetic mutation. How confident can you be that prostate tumors never have this mutation?"

    I think that the question of interest is whether prostate tumors are associated with that mutation. And indeed, there is strong evidence of a lack of association.

    However, the question of whether **any** prostate tumor will have that mutation is essentially Hempel's Raven, and the evidence that none will is weak.

  3. Sorry, I misspoke.

    There is strong evidence for a negative association, not a lack of association.
