Friday, August 27, 2010

Trade-offs between Type I and Type II error

I was reading this blog post by Andrew Gelman on a test reported to be 100% accurate for Alzheimer's disease. Following up on the initial post, it appears that the test has only 64% specificity.

But the feature of this discussion that I find the most interesting is that the choice between sensitivity and specificity is a judgment that people seem to be very poor at making. Consider pain medication. If you want to make sure that everyone in serious pain gets appropriate pain control, then some fraudsters will get illicit narcotics. Alternatively, if you make the screen so tight that nobody can obtain narcotics via "fake pain," then some real cases will be undertreated.

We see the same thing with releasing people from prison. Even if former prisoners committed crimes only at the rate of the general population, at least some crimes could be prevented by a tougher release policy. Of course, this line of reasoning leads to absurd conclusions -- we could completely eliminate adult crime by jailing everyone for life on their eighteenth birthday. We see the same thing in the Ray Fisman argument for getting rid of 80% of teachers during probation -- it is treated as so important not to keep an inferior teacher that we should let go of many good teachers just to make sure we retain no sub-standard ones.

But people don't seem to like making these trade-offs. In the case of the test for Alzheimer's disease, the authors could have achieved much higher specificity if they had been willing to give up some sensitivity. But, for some reason, people seem to prefer to end up at one extreme of the scale rather than in the middle (where the value of the test is maximized).
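To put rough numbers on the trade-off, here is a minimal sketch (in Python) of how positive and negative predictive value move as sensitivity is traded for specificity. The 10% prevalence and the intermediate operating points are illustrative assumptions, not figures from the study:

def predictive_values(sensitivity, specificity, prevalence):
    # Cell probabilities of the 2x2 table when the test is applied to the population.
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # P(disease | positive test)
    npv = true_neg / (true_neg + false_neg)   # P(no disease | negative test)
    return ppv, npv

# The reported 100% sensitive / 64% specific test, a balanced middle, and the reverse extreme.
for sens, spec in [(1.00, 0.64), (0.90, 0.90), (0.64, 1.00)]:
    ppv, npv = predictive_values(sens, spec, prevalence=0.10)
    print(f"sens={sens:.2f}  spec={spec:.2f}  ->  PPV={ppv:.2f}  NPV={npv:.2f}")

Under those assumptions, the 100%/64% test clears nearly everyone who tests negative but produces roughly three false positives for every true positive, while the middle operating point gives up very little negative predictive value for a large gain in positive predictive value.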

It's a phenomenon that I wish I understood better.

3 comments:

  1. The other reason for having 100% sensitive tests at the cost of specificity comes from the clinical trade-offs that follow from having done the test.

    If, for example, the treatment subsequent to a positive test is vitamin supplementation, which should have next to zero complications, then 100% sensitivity in the face of nasty complications caused by non-treatment makes quite a lot of sense.

    I could only guess what the Alzheimer's disease argument might be.

  2. That is a really good point. If you have a non-toxic therapy that is effective at reducing risk, then it makes a lot of sense to give vitamins to false positives rather than to miss people. We see this with people with elevated homocysteine all of the time.

    But the AD argument is much less compelling, as there is actual harm from a false positive.

  3. It could also be useful in the context of either cohort or clinical trial design though. You can phenotype a cohort that is at very high risk and then test your prophylactic treatment.
