There was a nice discussion of the plausibility of the employment figures in the new Paul Ryan 2012 budget proposal for the United States by both Andrew Gelman and Paul Krugman. While this blog isn't really a political one, I do think that this discussion is a good example of how to critically evaluate models in epidemiology. It is pretty rare that a model will be simply and obviously wrong. Instead, you have to look at all of the different elements of the model and see what looks dodgy. After all, the headline result is almost always something whose true value we are uncertain about. So we have to look for clues as to what might be going wrong by looking at the other outputs of the model (and perhaps some of the modeling assumptions).
If a study shows that the use of statin-class drugs prevents cancer, that is a pretty interesting finding. But the finding gets less interesting if further exploration reveals that statins prevent all forms of disease except cardiovascular disease. The latter would be a clue that something, somewhere, is going wrong.
In the case of the Paul Ryan budget, it seems like the estimate of unemployment is lower than it plausibly should be, which might obscure the trade-off between taxes and economic growth. I am not an economist (in any way, shape, or form), but I am willing to conjecture that the dynamic scoring algorithm for the influence of tax cuts on unemployment might be the issue. Perhaps the algorithm should account for diminishing returns as unemployment falls (but fails to do so properly). Or maybe the model overstates the magnitude of the underlying relationship (or, possibly, even reverses its sign). Complex models have a lot of moving parts, and there are a lot of places where bias can be introduced into them. So it's important to be critical (of both our own work and the work of others) when we try to do this type of difficult forecasting.
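To make that intuition concrete, here is a toy sketch in Python. Everything in it is made up by me for illustration (the function names, the response coefficient, the floor, the starting rate); it is not anyone's actual scoring model. The point is just that a rule which subtracts the same amount every year can grind projected unemployment down to implausible levels, while a version with diminishing returns levels off near a structural floor.

```python
# Toy illustration only: two hypothetical "dynamic scoring" rules for how a
# tax cut might lower unemployment over time. All numbers are invented.

def linear_step(u, tax_cut, beta=0.4):
    """Naive rule: each year the tax cut lowers unemployment by a fixed amount."""
    return u - beta * tax_cut

def saturating_step(u, tax_cut, beta=0.4, floor=4.0):
    """Same rule, but the effect is scaled by the remaining slack above a
    structural floor, so returns diminish as unemployment falls."""
    slack = max(u - floor, 0.0)
    return u - beta * tax_cut * slack / (slack + 1.0)

u_lin = u_sat = 8.0   # starting unemployment rate (%), made up
cut = 2.0             # size of the hypothetical tax cut

for year in range(1, 11):
    u_lin = linear_step(u_lin, cut)
    u_sat = saturating_step(u_sat, cut)
    print(f"year {year:2d}: linear {u_lin:5.2f}%   saturating {u_sat:5.2f}%")
```

Run it and the linear rule marches straight toward zero (and would go negative if you let it), while the saturating rule flattens out above the floor. A projection that looks like the first trajectory is exactly the kind of auxiliary output that should make you go back and question the machinery, even if you can't check the headline number directly.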