Pop economists (or, at least, pop micro-economists) often make one of two arguments:
1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.
2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.
Argument 1 is associated with "why do they do that?" sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there's some rational reason for what seems like silly or self-destructive behavior.
Argument 2 is associated with "we can do better" claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.
[sorry for the general link, there seemed to be a technical issue linking to the specific article]
I think that this insight is quite compelling and exposes a real issue with popular micro-economics. After all, these two positions mostly contradict each other!
I see this contradiction arising for different reasons than Andrew does. I think that economists are often forced to make strong assumptions in order to deal with the sorts of complex problems that they look at. So they presume rationality on the part of all actors when trying to explain behavior (i.e., descriptive work). But when they want to improve matters, they shift and reject this assumption (as it would suggest everything is already optimized). So they make different (strong and unverifiable) assumptions depending on whether they are trying to explain behavior or give guidance to improve outcomes.
What seems more alarming to me is that this pool of economists often doesn't "sanity check" the effect sizes of their analyses. Perhaps it is my background in epidemiology, but if I see a hazard ratio of 5 then I am immediately suspicious that it is too good to be true. However, Ray Fisman can see an analysis that suggests firing 80% of teachers and not necessarily wonder if perhaps there is an overly strong assumption in the analysis (like assuming an unlimited pool of new potential teachers, or overestimating the real sensitivity of the evaluation process).