
Friday, October 26, 2012

When you start a sentence with "one good indicator of a person who’s not so smart," you should be extra careful about what you say next

Andrew Gelman spends some time on this latest quote from Steven Levitt on the rationality of voting:
DUBNER: So Levitt, how can you in your life, when you wander around, tell the difference between a smart person and a not-so-smart person?

LEVITT: Well, one good indicator of a person who’s not so smart is if they vote in a presidential election because they think their vote might actually decide which candidate wins. . . . there has never been and there never will be a vote cast in a presidential election that could possibly be decisive.
Gelman has been riding this beat for a long time, repeatedly pointing out the flaws in this strangely persistent argument. He makes a good case (part of which I basically paraphrase in point one), but there are other problems with Levitt's claims.

Here's a brief and certainly incomplete list of objections.

1. Every vote affects the probability distribution of a race, and since the difference in outcomes is so large, even a tiny change in probabilities can conceivably create a detectable change in expected value. (A back-of-the-envelope sketch follows this list.)

2. And it's every vote in every race. Except for undervoting, we're talking about the combined impact of the entire ballot.

3. This isn't binary. The margin of a win can affect:

Perceived mandate and political capital;

Officials' decisions (particularly in non-term-limited positions). Congressmen who win by large margins are less likely to feel constrained about unpopular votes;

Funding. A lopsided defeat can make it harder for a candidate or a state party to raise money;

Party strategy. How much effort do you expend finding a challenger against an official who beat you by more than ten points last time?

Media narrative. It's possible to come back after the press corps has labeled you a loser, but it isn't easy.

and finally

4. The system works better with higher response rates. It's more stable and harder to game. Perhaps even more important, it does a better job representing the will of the governed.

That's an off-the-top-of-my-head list. Undoubtedly, I missed some.
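To make point one concrete, here is a minimal back-of-the-envelope sketch in Python. The probability, per-person benefit, and cost figures are placeholders of my own choosing, not Gelman's published estimates; the only point is that a tiny probability of being decisive, multiplied by an enormous aggregate stake, need not round to zero.

def expected_value_of_vote(p_decisive, benefit_per_person, population, cost_of_voting):
    """Expected net value of casting one vote under a simple social-benefit model."""
    expected_benefit = p_decisive * benefit_per_person * population
    return expected_benefit - cost_of_voting

# Illustrative (assumed) inputs: a voter in a competitive state might face odds
# on the order of 1 in 10 million of casting the decisive vote, and might value
# the better outcome at $50 per fellow citizen.
p_decisive = 1e-7          # chance this one vote decides the election (assumed)
benefit_per_person = 50.0  # dollars of value per person from the better outcome (assumed)
population = 3.1e8         # rough U.S. population
cost_of_voting = 20.0      # time and hassle, in dollars (assumed)

print(expected_value_of_vote(p_decisive, benefit_per_person, population, cost_of_voting))
# prints 1530.0: roughly $1,550 of expected benefit against a $20 cost

Change the numbers however you like; the conclusion only flips if you assume the voter cares about nothing beyond his or her own narrow payoff.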

Gelman goes on:
I would not conclude from the above discussion that Levitt is not so smart. Of course he’s very smart, he just happens to be misinformed on this issue. I applaud Levitt’s willingness to go out on a limb and say controversial things in a podcast, to get people thinking. I just wish he’d be a bit less sure of himself and not go around saying that he thinks that Aaron, Noah, Nate and I are not so smart.
He's being overly diplomatic. Levitt isn't just misinformed; he's willfully misinformed. In issue after issue (drunk driving, car seats, solar energy) he has used sloppy reasoning to reach a controversial position, then has done his best to turn a deaf ear to those who pointed out his errors. We did get a partial retraction of his claims on driving*, but on others he has doubled down and occasionally resorted to cheap shots at those who disagreed with him.

Levitt is very smart. That's what makes this sort of thing so difficult to overlook.




* Though still leaving potential errors unacknowledged, such as the likely possibility that drivers in accidents are more likely to be checked for intoxication than pedestrians, that a stricter standard might be used, that many of the most intoxicated are prevented from driving, and that intoxication is more likely to be noted in official records for drivers.

Saturday, July 16, 2011

Anybody's culpa?

In what might be considered a commissioned blog post, Jonathan Robinson manages to dig up some comments that Steven Levitt would probably like to rebury.



It is, of course, possible to make too much of a bad call or even a bad argument. If the penalty for being wrong is too high, it inhibits the conversation, but there does have to be some kind of accountability. When you're this far off on something this big, you really ought to do something: modify your reasoning, concede that your assumptions may need some work, maybe even just acknowledge the error but dismiss it as an anomaly.

Instead, we have a debate where, as Paul Krugman has repeatedly pointed out, no one ever has to admit he or she is wrong. You will occasionally see someone step up, but it's strictly done on a volunteer basis.

Maybe we need to go beyond the honor system on this one.

Tuesday, May 3, 2011

Apparently, the lot of some economists is to be painfully condescending

Particularly freshwater economists who seem determined to make a leitmotif out of boastful complaints about the burdens of being smarter and more logical than the rest of us. Here's Will Wilkinson with a recent example:
The most curious thing about Mr Krugman's quasi-religious squeamishness about the "commercial transaction" is that it is normally the economist's lot to explain to the superstitious public the humanitarian benefits of bringing human life ever more within the cash nexus.
Wilkinson's entire post is (unintentionally) interesting and you should definitely take a look at it (though you should also take a look at the rebuttals here and here), but for a distillation of the freshwater mindset, you really can't beat the line about 'the economist's lot.' (I suspect Wilkinson may have been going for humorous wording here but I doubt very much he was joking.)

For a less pithy though perhaps more instructive example, consider these comments Steve Levitt made on Marketplace:
One of the easiest ways to differentiate an economist from almost anyone else in society is to test them with repugnant ideas. Because economists, either by birth or by training, have their mind open, or skewed in just such a way that instead of thinking about whether something is right or wrong, they think about it in terms of whether it's efficient, whether it makes sense. And many of the things that are most repugnant are the things which are indeed quite efficient, but for other reasons -- subtle reasons, sometimes, reasons that are hard for people to understand -- are completely and utterly unacceptable.
As I said at the time:
There are few thoughts more comforting than the idea that the people who disagree with you are overly emotional and are not thinking things through. We've all told ourselves something along these lines from time to time.

But can economists really make special claim to "whether [ideas] makes sense"? Particularly a Chicago School economist who has shown a strong inclination toward the kind of idealized models that have great aesthetic appeal but mixed track records? (This is the same intellectual movement that gave us rational addiction.)

When I disagree with Dr. Levitt, it's for one of the following reasons:

I question his analyses;

I question his assumptions;

I question the validity of his models.

Steve Levitt is a smart guy who has interesting ideas, but a number of intelligent, clear-headed individuals often disagree with him. Some of them are even economists.

Saturday, January 22, 2011

Note to Gelman -- first fill its mouth with salt, then light candles, then decapitate

Andrew Gelman is once again going after the voting-is-irrational zombie (disinterred this time by the Freakonomics team). Gelman shows, using estimates that if anything err on the conservative side, that the possibility of influencing an election, though small, can still easily be associated with a reasonable expected value.

This particular zombie has been shambling through the dark corridors of pop econ books and columns for years now (Gelman himself has been monster hunting since at least 2005), but every time the creature seems truly dead and buried, along comes someone like Landsburg or Levitt, someone who's smart enough and mathematically literate enough to know better, but who just can't resist digging up the grave.

Wednesday, December 29, 2010

Warzones and medical research

There was a fascinating story on today's All Things Considered about the ways that our experiences with the Afghanistan and Iraq wars have added to our understanding of emergency medicine:
The medevac choppers land and then taxi over to the gate just outside the emergency room, where gurneys are waiting. Nightfall has brought a bone-chilling wind, and a gang of nurses and orderlies rushes four patients into the warmth of the ER.

It's more than warm inside. In fact it's 100 degrees. It's the first clue that this hospital — the Joint Theater Hospital at Afghanistan's Bagram Air Field — is a little different. Through years of war, combat surgeons have learned that hypothermia is a big risk in patients with significant blood loss. Nine years of conflict in Iraq and Afghanistan have brought some grim benefits: a new wealth of knowledge about treating war wounds.

"At the beginning of this conflict, we were taking the best trauma medicine from the civilian sector, and we brought it to Iraq and Afghanistan," says U.S. Air Force Col. Chris Benjamin, the hospital commander. He says now his doctors tell him it's the other way around.
This got me thinking about another story I heard on the same public radio station last night, the subject of my previous post. In it Steve Levitt said:
One of the easiest ways to differentiate an economist from almost anyone else in society is to test them with repugnant ideas. Because economists, either by birth or by training, have their mind open, or skewed in just such a way that instead of thinking about whether something is right or wrong, they think about it in terms of whether it's efficient, whether it makes sense.
In health and medicine, researchers (some of whom are, admittedly, economists) don't seem to have any trouble getting past the repugnance of ideas like using controversial wars as data-gathering opportunities. It's true that these researchers pass up some data that is considered ethically tainted, but this has nothing to do with the mentality of the researchers and everything to do with a set of ethical rules that many researchers consider to be overly restrictive and due for an overhaul.

Given these and other counterexamples, Dr. Levitt's quote may, more than anything else, tell us something about the way many economists see themselves.

Tuesday, December 28, 2010

Freakonomics: disagreeing about why we disagree

On today's Marketplace, Steve Levitt explains why he thinks many people see the world differently than he does:
One of the easiest ways to differentiate an economist from almost anyone else in society is to test them with repugnant ideas. Because economists, either by birth or by training, have their mind open, or skewed in just such a way that instead of thinking about whether something is right or wrong, they think about it in terms of whether it's efficient, whether it makes sense. And many of the things that are most repugnant are the things which are indeed quite efficient, but for other reasons -- subtle reasons, sometimes, reasons that are hard for people to understand -- are completely and utterly unacceptable.
There are few thoughts more comforting than the idea that the people who disagree with you are overly emotional and are not thinking things through. We've all told ourselves something along these lines from time to time.

But can economists really make special claim to "whether [ideas] makes sense"? Particularly a Chicago School economist who has shown a strong inclination toward the kind of idealized models that have great aesthetic appeal but mixed track records? (This is the same intellectual movement that gave us rational addiction.)

When I disagree with Dr. Levitt, it's for one of the following reasons:

I question his analyses;

I question his assumptions;

I question the validity of his models.

Steve Levitt is a smart guy who has interesting ideas, but a number of intelligent, clear-headed individuals often disagree with him. Some of them are even economists.

Tuesday, March 23, 2010

More questions about the statistics of Freakonomics

Felix Salmon is on the case:

There’s a nice empirical post-script to the debate over the economic effects of classifying the Spotted Owl as an endangered species. Freakonomics cites a study putting the effect at $46 billion, but others, including John Berry, who wrote a story on the subject for the Washington Post, think it’s much closer to zero.

And now it seems the Berry side of the argument has some good Freakonomics-style panel OLS regression analysis of the microeconomy of the Pacific Northwest to back up its side of the argument. A new paper by Annabel Kirschner finds that unemployment in the region didn't go down when the timber industry improved, and it didn't go up when the timber industry declined — not after you adjust for much more obvious things like the presence of minorities in the area.
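For readers curious what that kind of analysis looks like in practice, here is a minimal sketch of a panel regression with fixed effects and a demographic control. The data are simulated and the variable names are my own invention; this illustrates the general setup Salmon describes, not Kirschner's actual model or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small county-by-year panel in which timber employment has no
# true effect on unemployment once demographics are accounted for.
rng = np.random.default_rng(0)
rows = []
for county in range(20):
    base = rng.normal(6.0, 1.0)             # county-specific baseline unemployment
    for year in range(1990, 2000):
        minority = rng.uniform(0.05, 0.35)  # demographic control (simulated)
        timber = rng.normal(10.0, 3.0)      # timber employment share (simulated)
        unemployment = base + 8.0 * minority + rng.normal(0.0, 0.5)
        rows.append(dict(county=county, year=year, timber=timber,
                         minority=minority, unemployment=unemployment))
df = pd.DataFrame(rows)

# OLS with county and year fixed effects plus the demographic control.
fit = smf.ols("unemployment ~ timber + minority + C(county) + C(year)", data=df).fit()
print(fit.params["timber"], fit.pvalues["timber"])  # timber coefficient near zero by construction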

Saturday, February 27, 2010

Meta-Freakonomics

Joseph recently wrote a post referring to this post by Andrew Gelman (which was based on a series of posts by Kaiser Fung which check the veracity of various claims in Superfreakonomics -- welcome to the convoluted world of the blogosphere). Joseph uses Dr. Gelman's comments about the poor editing and fact-checking of the book to make a point about the disparity between the contribution editing makes and how little we reward it. He ought to know; I have frequently taken advantage of his good nature in this area, but at the risk of being ungrateful, I don't think the point applies here. Rather than being helpful, the kind of criticism Joseph and Gelman describe could only hurt Superfreakonomics.

Or put another way, if we approach this using the techniques and assumptions of the Freakonomics books, we can show that by foregoing a rigorous internal review process the authors were simply acting rationally.

Before we get to the actual argument, we need to address one more point in Joseph's post. Joseph says that providing a critical read "is one of the most helpful things a colleague can do for you, yet one of the least rewarded." This statement is absolutely true for easily 99.9% of the books and manuscripts out there. It is not, however, true for the Freakonomics books. Between their prestige and the deep pockets of William Morrow, Levitt and Dubner could have gotten as many highly-qualified internal reviewers as they wanted, reviewers who would have been compensated with both an acknowledgment and a nice check. (Hell, they might even get to be in the movie.)

But if the cost and difficulty of putting together an all-star team of reviewers for Superfreakonomics would have been negligible, how about the benefits? Consider the example of its highly successful predecessor. Freakonomics was so badly vetted that two sections (including the book's centerpiece on abortion) were debunked almost immediately. The source material for the KKK section was so flawed that even Levitt and Dubner disavowed it.

These flaws could have been caught and addressed in the editing process but how would making those corrections help the authors? Do we have any reason to believe that questionable facts and sloppy reasoning cost Levitt and Dubner significant book sales (the book sold over four million copies)? That they endangered the authors' spot with the New York Times? Reduced in any way the pervasive influence the book holds over the next generation of economists? Where would Levitt and Dubner have benefited from a series of tough internal reviews?

Against these elusive benefits we have a number of not-so-hard-to-find costs. While the time and money required to spot flaws is relatively minor, the effort required to address those flaws can be substantial.

Let's look at some specifics. Kaiser Fung raises a number of questions about the statistics in the "sex" chapter (the one about female longevity is particularly damning) and I'm sure he overlooked some -- not because there was anything wrong with his critique but because finding and interpreting reliable data on a century of sex and prostitution is extraordinarily difficult. It involves measuring covert behavior that can be affected by zoning, police procedures, city politics, shifts in organized crime, and countless other factors. Furthermore, these same factors can bias the collection of data in nasty and unpredictable ways.

Even if all of the sex chapter's underlying economics arguments were sound (which they are, as far as I know), there would still have been a very good chance that some reviewer might have pointed out flawed data, discredited studies, or turned up findings from more credible sources that undercut the main hypotheses. That doesn't mean that the chapter couldn't be saved -- a good team of researchers with enough time could probably find solid data to support the arguments (assuming, once again, that they were sound) but the final result would be a chapter that would look about the same to the vast majority of readers and external reviewers -- all cost, no benefit.

Worse yet, think about the section on the relative dangers of drunken driving vs. drunken walking. These cute little counter-intuitive analyses are the signature pieces of Levitt and Dubner (and were associated with Dr. Levitt before he formed the team). They are the foundation of the brand. Unfortunately, counter-intuitive analyses tend to be fragile creatures that don't fare that well under scrutiny (intuition has a pretty good track record).

The analysis of modes of drunken transportation would be one of the more fragile ones. Most competent internal reviewers would have had the same reaction that Ezra Klein had:
You can go on and on in this vein. It's terrifically shoddy statistical work. You'd get dinged for this in a college class. But it's in a book written by a celebrated economist and a leading journalist. Moreover, the topic isn't whether people prefer chocolate or vanilla, but whether people should drive drunk. It is shoddy statistical work, in other words, that allows people to conclude that respected authorities believe it is safer for them to drive home drunk than walk home drunk. It's shoddy statistical work that could literally kill somebody. That makes it more than bad statistics. It makes it irresponsible.
Let me be clear. I am not saying that Levitt and Dubner knew there were mistakes here. Quite the opposite. I'm saying they had a highly saleable manuscript ready to go which contained no errors that they knew of, and that any additional checking of the facts, the analyses or logic in the manuscript could only serve to make the book less saleable, to delay its publication or to put the authors in the ugly position of publishing something they knew to be wrong.

Gelman closes his post with this:
It's the nature of interesting-but-true facts that they're most interesting if true, and even more interesting if they're convincingly true.
Perhaps, but Levitt and Dubner have about four million reasons that say he's wrong.