Tuesday, April 29, 2014

Problems that (nearly) rich people have -- college edition

Yet another one of those posts that I started weeks ago as part of the big SAT thread then didn't get around to posting.

What are the major concerns of high school students applying for college? It's a long list but based on having worked with high school kids (primarily in urban and rural areas including Watts and the Mississippi Delta), I'd probably say:

Finding the money to pay for it;

Being able to finish in four years;

Avoiding remedial courses.

If, on the other hand, I was going to make my list based on what I read in the New York Times, the number one concern would clearly be not getting into the college of your choice.
[The SAT] was one of the biggest barriers to entry to the colleges [students] dreamed of attending.
I don't want to whitewash the issues with the SAT and its role in college selection. The test has a history of being misused and there are real concerns about cultural biases in the verbal section, but even with these problems, the NYT's assertion simply isn't true for most students. For kids hoping to find a way to cover rent and groceries while attending local community colleges or four-year schools, fear of a bad SAT simply isn't a high-priority concern.

It is, however, for one segment of the population, namely the well-off.

I'm not talking about the rich. For people with serious money, there really aren't big barriers to getting kids into an elite school. I'm talking about roughly the top ten percent minus the top one-half percent, people who have the money to cover a pricey tuition and to get their kids into the schools and settings where Ivy League admissions are fairly common. In other words, these are families with the resources to get their kids in range of prestigious schools.

The coverage of the SAT in major publications has been written almost entirely from the viewpoint of that nine and a half percent. This is, of course, not the first time we've seen the press (particularly the NYT) write from this perspective. A few years ago, we heard a great deal about how difficult it could be for a family to get by on between $250,000 and $350,000 in taxable income.

We could speculate on the underlying causes for this slant, but I think the important part is that the people writing and editing these stories seem completely unaware of how the world looks to the bottom 90%.

Monday, April 28, 2014

More on understanding the math but not the statistics

[One of the standard rebuttals to criticisms of popular STEM writing is that certain compromises have to be made when putting things in 'layman's terms.' To head off that particular charge, I'm going to use as little technical language as possible in this post.]

Before I post something, I usually do one final search on the subject, just to avoid any surprises. As a result, I often discover better examples than the ones I used in the post. Case in point, after writing a post looking at the pre-538 work of Walt Hickey (and concluding that the editors at 538 appeared to be doing a better job than those at Business Insider), I found this article by Hickey from the Atlantic:

5 Statistics Problems That Will Change The Way You See The World

It was a fairly standard piece (the kind that invariably includes the Monty Hall paradox) and I skimmed through it quickly until the final section, which I found myself reading repeatedly to make sure it actually said what I thought it said:
(5) SIMPSON'S PARADOX
A kidney study is looking at how well two different drug treatments (A and B) work on small and large kidney stones. Here is the success rate that was found:
Small Stones, Treatment A: 93%, 81 out of 87 trials successful
Small Stones, Treatment B: 87%, 234 out of 270 trials successful
Large Stones, Treatment A: 73%, 192 out of 263 trials successful
Large Stones, Treatment B: 69%, 55 out of 80 trials successful. 
Which is the better treatment, A or B? 
ANSWER: TREATMENT B 
Even though Treatment A had higher success rates in both small and large stones, when the whole trial is viewed as a sample space Treatment B is actually more successful: 
Small Stones, Treatment A: 93%, 81 out of 87 trials successful
Small Stones, Treatment B: 87%, 234 out of 270 trials successful
Large Stones, Treatment A: 73%, 192 out of 263 trials successful
Large Stones, Treatment B: 69%, 55 out of 80 trials successful.
All stones, Treatment A: 78%, 273 of 350 trials successful
All stones, Treatment B: 83%, 289 of 350 trials successful. 
This is an excellent example of Simpson's Paradox, where correlation in separate groups doesn't necessarily translate to the whole sample set. 
In short, just because there correlation in smaller groups hides the real story taking place in the largest of groups.
This is an almost perfect example of what I mean by understanding the math but not the statistics. The math, though somewhat counterintuitive (as you would expect from a 'paradox'), is straightforward: in certain situations it is possible to have observations of a data set distributed in such a way that, if you cut the set up along certain lines, two variables will have a positive correlation in each subsection but will have a negative correlation when you put them together. It's an interesting result -- cut things one way and you see one thing, cut them another and you see the opposite -- but it doesn't seem particularly meaningful and it certainly doesn't suggest that one view is right and the other is wrong. The result is just ambiguous. ("This is an excellent example of Simpson's Paradox, where correlation in separate groups doesn't necessarily translate to the whole sample set, causing ambiguity.")

When, however, you start thinking not just mathematically but statistically (and more importantly, causally), one view is very much better than the other. Let's look at the kidney stone example again. What we see here is a lot more patients with large stones being given treatment A and a lot more patients with small stones being given treatment B. This is something we see all the time in observational data, more powerful treatments being given to more extreme cases.

This is one of the first things a competent statistician checks for because that relationship we see in the undivided data set is usually covering up the relationship we're looking for. In this case, the difference we see in the partitioned data is probably due to the greater effectiveness of treatment A while the difference we see in the unpartitioned data is almost certainly due to the greater difficulty in treating large kidney stones. Though there are certainly exceptions, statisticians generally combine data when they want larger samples and break it apart when they want a clearer picture.
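To make the arithmetic concrete, here is a minimal Python sketch using nothing but the trial counts from Hickey's table (the code and variable names are mine, purely for illustration). It recomputes the success rates within each stone size and then for the pooled data, showing how the ordering flips once the two very different caseloads are mixed together.

# Kidney stone data quoted above: (successes, trials) per treatment and stone size.
data = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    """Success rate as a percentage."""
    return 100.0 * successes / trials

# Within each stone size, Treatment A comes out ahead...
for size, treatments in data.items():
    for name, (s, n) in treatments.items():
        print(f"{size} stones, Treatment {name}: {rate(s, n):.0f}% ({s}/{n})")

# ...but pooled across stone sizes, Treatment B looks better, because A was
# given far more of the hard (large-stone) cases.
for name in ("A", "B"):
    s = sum(data[size][name][0] for size in data)
    n = sum(data[size][name][1] for size in data)
    print(f"All stones, Treatment {name}: {rate(s, n):.0f}% ({s}/{n})")

Nothing in the pooled numbers is wrong as arithmetic; the problem is that pooling quietly mixes an easy caseload with a hard one.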

The version posted at Business Insider with a later timestamp has a different conclusion ("Answer: Treatment A, once you focus on the subsets"). This appears to be a corrected version, possibly in response to this comment:
KSC on Nov 13, 12:33 PM said:
After reading the wikipedia article I believe your answer in the Simpson's paradox example is incorrect.
Treatment B is not better. Treatment A is better.
As pointed out in the article Treatment B appears better when looking at the whole sample because the treatments were not randomly assigned to small and large stone cases.
The better treatment (A) tended to be used on the more difficult cases (large stones) and the weaker treatment (B) tended to be used on the simpler cases (small stones).
Even in the corrected version, though, Hickey still closes his badly garbled conclusion with "correlation in smaller groups hides the real story taking place in the largest of groups." Between that and the odd wording of the unacknowledged correction (A is better, period; when we "focus on the subsets," we control for another factor that obscured the results), it seems that Hickey didn't understand his mistake even after it was explained to him.

Though I've had some rather critical things to say about 538 recently, there's no question that its publisher and editors do understand statistics. These days, that's enough to put them ahead of the pack.

Sunday, April 27, 2014

Adam Smith is a deeper thinker than he is often given credit for

A very nice extraction of some of Adam Smith's views is here at the Monkey Cage.  A couple of key passages:
For instance, he described Holland as the most advanced and prosperous economy in his time. His explanation was simple but critical: Every “man of business” was forced to work because rates of profit were low (about 3 percent). With such low returns and little capital accumulation, it was “impossible” for anyone “to live upon the interest of their money.” This was the key to economic success for Smith: fundamentals forcing everyone to work. But you can’t get concentration of wealth in such a system.
and

And not only is the taxation of inheritance advisable (except for minors), but taxation is a tool to micromanage incentives, especially for the spendthrift rich. His priorities are clear: The “inequality of the worst kind” is when taxes “fall much heavier upon the poor than upon the rich.” Which is why the rich should be taxed “something more than in proportion.” Smith in fact praises the British tax system, which taxed twice as much per capita as France, because “no particular order is oppressed:” The rich were taxed, unlike in France. Smith had only one criterion: Taxes should encourage the productive use of capital. 
In other words, the goal of Adam Smith's view of economics is to make it hard to acquire wealth so that everyone is forced to work (to be productive).  It is a sensible view coming on the heels of centuries of upper classes defined by inherited wealth, and the consequent stagnation of European innovation.

Worth reflecting on . . .



Saturday, April 26, 2014

Weekend blogging -- string pragmatists and legal meta-information, two more reasons I wish I could embed CBS clips

I understand their reasons (one of these days I need to do something on the mismanagement of Hulu by way of comparison), but this post would work a lot better if I could include these clips instead of sending you to CBS.com.

The first is from the very sharp NSA arc of the Good Wife. Keep in mind, I come in with a strong prejudice against these stories. Compared with issues like mass incarceration and stop-and-frisk, the NSA hardly even registers on my list of civil liberty concerns. I've found most of the airtime spent on the topic sanctimonious and annoying, but show runners Robert and Michelle King downplayed the outrage and instead dug into the narrative possibilities.

The Good Wife has always been a show about information, meta-information and game theory -- what I know, what you know and what I know you know -- so massive wiretapping of law firms and politicians fits in perfectly. As always, one of the pleasures of the show is watching characters immediately shift tactics and strategies as each new piece of information breaks. For example, check out how the governor of Illinois (as you would expect, currently under investigation) reacts to learning that he's in the middle of a three-hop warrant.

The other clip I wish I could embed is the opening sequence of the Big Bang Theory. The subject matter is completely different (not like you were expecting a smooth segue anyway), but I suspect anyone out there involved with academic research will enjoy hearing the definition of 'string pragmatist.'

Friday, April 25, 2014

With many standard stories, what gets left out is often the best part

This is another zombie example post, sort of a follow-up to the tulipmania rant, but in a bit of a different subgenre. That post focused on the way a trivial aspect of something is treated as the key element. This post is concerned with truncated stories, rhetorical jokes that chop off the punchline. These are stories you hear frequently but almost invariably with the best part left out.

Case in point, the Brown Bunny.

A recent feature on the Rolling Stone website reminded me of this story.  In case you don't know it, here's the account (slightly revised for this blog's general audience):
A memo to aspiring filmmakers: You can spend a large amount of your running time doing virtually nothing — hell, you can even be as narcissistic as anyone in showbiz — so long as you cap off your movie with a starlet [to borrow a line from an old Hill Street Blues, performing an act of non-reproductive intimacy]. That's the main takeaway from writer-director-actor Vincent Gallo's pet project about a motorcycle rider who does, well, not much more than brood. But the reason we still talk about this movie (beside the fact that it gave birth to a world-class spat between Gallo and critic Roger Ebert) is a lengthy scene near the end in which Gallo's costar, Chloe Sevigny, [performs an act of non-reproductive intimacy].
Those two details -- the feud with Ebert and the gratuitous sex scene -- are the basis of the standard narrative about the film. There are two basic variations on this story with two conflicting morals. The first and much more common is the one you find in the Rolling Stone piece: a parable of the dangers of cinematic self-indulgence. The second, found in the more cutting-edge set, casts the whole affair as an example of a closed-minded bourgeois critic who does not understand avant-garde filmmaking.

The trouble with both these stories is that they leave out the best part of the feud. It is true that Ebert violently complained about the original film he saw at Cannes, but what he objected to was the editing, and despite the colorful back and forth, after the dust had settled, Gallo went back to the editing room and basically did everything Ebert wanted him to do. Ebert then rewrote his review and gave the film three stars and a "thumbs up" on his TV show. Here's an excerpt of his review of the new version:
The audience was loud and scornful in its dislike for the movie; hundreds walked out, and many of those who remained only stayed because they wanted to boo. Imagine, I wrote, a film so unendurably boring that when the hero changes into a clean shirt, there is applause. The panel of critics convened by Screen International, the British trade paper, gave the movie the lowest rating in the history of their annual voting.

But then a funny thing happened. Gallo went back into the editing room and cut 26 minutes of his 118-minute film, or almost a fourth of the running time. And in the process he transformed it. The film's form and purpose now emerge from the miasma of the original cut, and are quietly, sadly, effective. It is said that editing is the soul of the cinema; in the case of "The Brown Bunny," it is its salvation.

Critics who saw the film last autumn at the Venice and Toronto festivals walked in expecting the disaster they'd read about from Cannes. Here is Bill Chambers of Film Freak Central, writing from Toronto: "Ebert catalogued his mainstream biases (unbroken takes: bad; non-classical structure: bad; name actresses being aggressively sexual: bad) ... and then had a bigger delusion of grandeur than 'The Brown Bunny's' Gallo-centric credit assignations: 'I will one day be thin but Vincent Gallo will always be the director of 'The Brown Bunny.' "

Faithful readers will know that I admire long takes, especially by Ozu, that I hunger for non-classical structure, and that I have absolutely nothing against sex in the cinema. In quoting my line about one day being thin, Chambers might in fairness have explained that I was responding to Gallo calling me a "fat pig" -- and, for that matter, since I made that statement I have lost 86 pounds and Gallo is indeed still the director of "The Brown Bunny."

But he is not the director of the same "Brown Bunny" I saw at Cannes, and the film now plays so differently that I suggest the original Cannes cut be included as part of the eventual DVD, so that viewers can see for themselves how 26 minutes of aggressively pointless and empty footage can sink a potentially successful film. To cite but one cut: From Cannes, I wrote, "Imagine a long shot on the Bonneville Salt Flats where he races his motorcycle until it disappears as a speck in the distance, followed by another long shot in which a speck in the distance becomes his motorcycle." In the new version we see the motorcycle disappear, but the second half of the shot has been completely cut. That helps in two ways: (1) It saves the scene from an unintended laugh, and (2) it provides an emotional purpose, since disappearing into the distance is a much different thing from riding away and then riding back again.
...
Gallo allows himself to be defenseless and unprotected in front of the camera, and that is a strength. Consider an early scene where he asks a girl behind the counter at a convenience store to join him on the trip to California. When she declines, he says "please" in a pleading tone of voice not one actor in a hundred would have the nerve to imitate. There's another scene not long after that has a sorrowful poetry. In a town somewhere in the middle of America, at a table in a park, a woman (Cheryl Tiegs) sits by herself. Bud Clay parks his van, walks over to her, senses her despair, asks her some questions, and wordlessly hugs and kisses her. She never says a word. After a time he leaves again. There is a kind of communication going on here that is complete and heartbreaking, and needs not one word of explanation, and gets none.

In the original version, there was an endless, pointless sequence of Bud driving through Western states and collecting bug splats on his windshield; the 8 1/2 minutes Gallo has taken out of that sequence were as exciting as watching paint after it has already dried. Now he arrives sooner in California, and there is the now-famous scene in a motel room involving Daisy (Chloe Sevigny). Yes, it is explicit, and no, it is not gratuitous.
[If you're curious about the rest of the back story you should also check out Ebert's conversation with Gallo after the re-edit.]

You often hear critics of narrative journalism say that it sees patterns where none exist or that its practitioners are too quick to converge on a common viewpoint. I'm in complete agreement, but sometimes the thing that bothers me the most is just how bad many of these stories are: boring, hackneyed, simplistic. Surprisingly often, what actually happened made a better story before it was crafted into a journalistic narrative.

In this case, at least for me, the often-omitted ending is the only thing that makes this story interesting.

There is, however, hope for the unjustly truncated standard narrative. For years, Van Halen's "no brown M&Ms" clause was held up as the ultimate example of childish rock-star excess. Now, though, thanks in part to the good people at Snopes, the full (and much better) story has become the new standard.



Brown M&Ms from Van Halen on Vimeo.

Thursday, April 24, 2014

Why I criticize 538 more than Business Insider (mainly because I don't read Business Insider)

Okay, that's not really true. I do check out the occasional Business Insider article when it is recommended by one of the bloggers I follow and I do have other reasons for discussing 538. Silver's website is new and newsworthy and it publishes a number of important writers (including Silver himself) whom anyone interested in statistics should read. For this and other reasons, 538 has become ground zero for discussions about how the media should cover data.

Unfortunately, one side effect of all this attention has been to create the impression of implicit comparisons. When we talk about the weaker articles in 538 because we think the direction of the website is important, we can leave people thinking that weak articles are disproportionately found in 538. That is by no means a sound conclusion.

With the obvious exception of Roger Pielke Jr., my least favorite 538 hire is probably Walt Hickey (though concerned, I'm reserving judgement on Emily Oster for the moment). Hickey seems like a nice, well-intentioned fellow, but from what I've seen, he's an excellent example of one of those data journalists who understand the math but not the statistics, getting the procedures right but missing the point (this is somewhat analogous to Feynman's comments about textbook writers missing the nuances of math and science).

I decided to Google him. It turns out that Hickey (going under the slightly more businesslike 'Walter') was a prolific contributor to Business Insider (among other sites). Since he seemed to be doing a lot of entertainment reporting for 538, I looked for something similar on BI and came up with this:

Here's Where All The Miley Cyrus Haters Live

The data set consisted of the addresses (five-digit ZIP only) of the 158 complaint letters sent to the FCC after Miley Cyrus's performance at MTV's VMA award show. This is not a good data set, but it is possible to do some mildly interesting demographic breakdowns. It's not as good as it would have been with nine-digit ZIPs (those open up a lot of useful information), but you could, for example, look at things like city size.



But what you would never want to do with 158 addresses is a state-by-state breakdown.


This was followed by a list of "the top ten most irate states, based on letters sent per capita" with the sparsely populated South Dakota coming in at number four based on just one letter to the FCC.
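To get a feel for how fragile a per-capita ranking built on 158 letters is, here's a rough back-of-the-envelope sketch. The single South Dakota letter comes from the article; the population figures are rounded approximations and the other letter counts are hypothetical, purely for illustration.

# Letters per million residents -- the metric behind a "most irate states" list.
def letters_per_million(letters, population):
    return 1_000_000 * letters / population

sd_pop = 845_000      # South Dakota, roughly
ca_pop = 38_300_000   # California, roughly

# One letter from South Dakota (the actual count behind its number-four ranking)
print(letters_per_million(1, sd_pop))    # ~1.2 letters per million

# California would need around 45 letters just to match that rate.
print(letters_per_million(45, ca_pop))   # ~1.2 letters per million

# And one letter either way moves South Dakota from "top ten most irate"
# to not appearing on the list at all.
print(letters_per_million(0, sd_pop))    # 0.0

In other words, the ranking is driven almost entirely by a handful of individual letter writers who happen to live in small states.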

(As a side note, when I went back to the article to write this, I tried to find it again by searching Business Insider for Miley Cyrus. Big mistake. You would not believe how many posts came up.)

Feel free to discuss this graph, but the point I want to make is that based on this and the other articles I looked at, Hickey appears to have improved considerably when he moved to 538.  I'm still not impressed with the work he's doing now, but that's an absolute, not a relative statement. Furthermore, this case raises some real questions about Noah Smith's claim that  "In sum, this so-called “data-driven” website is significantly less data-driven (and less sophisticated) than Business Insider or Bloomberg View or The Atlantic."

Wednesday, April 23, 2014

Believe it or not, we've been talking about the nice Krugman -- some perspective on the 538 debate

[You may have trouble getting past the NYT firewall on these. If so, the easiest way around this is either to Google the name and author or do what I did and go here for a complete set.]

One of the side questions in the ongoing 538 debate is whether or not Nate Silver and his writers are being excessively criticized. There is certainly some truth to the charge (for reasons I'll get into later), but it's also important to remember that, to a remarkable extent, Silver walked into a bar fight, a number of intense, ongoing debates about science and statistics, some of which had long ago turned quite nasty.

Quite a few of those fights involved Freakonomics, and they touched on climate change in particular and contrarian data journalism in general. The hiring of Roger Pielke Jr. and Emily Oster raised the specter of those two issues respectively. It was pretty much inevitable that the association would heighten the criticism of 538. That association does not mean, however, that the two are being equated. As far as I can tell, the tone of criticism of Silver within the analytic community has been disappointed and concerned rather than angry.

Having previously discussed Krugman's criticisms of 538, it's useful to compare them to his reaction to Superfreakonomics. For me, at least, the difference in tone is notable.

From
A counterintuitive train wreck
At first glance, though, what it looks like is that Levitt and Dubner have fallen into the trap of counterintuitiveness. For a long time, there’s been an accepted way for commentators on politics and to some extent economics to distinguish themselves: by shocking the bourgeoisie, in ways that of course aren’t really dangerous. Ann Coulter is making sense! Bush is good for the environment! You get the idea.

Clever snark like this can get you a long way in career terms — but the trick is knowing when to stop. It’s one thing to do this on relatively inconsequential media or cultural issues. But if you’re going to get into issues that are both important and the subject of serious study, like the fate of the planet, you’d better be very careful not to stray over the line between being counterintuitive and being just plain, unforgivably wrong.

It looks as if Superfreakonomics has gone way over that line.


From
Superfreakonomics on climate, part 1
OK, I’m working my way through the climate chapter — and the first five pages, by themselves, are enough to discredit the whole thing. Why? Because they grossly misrepresent other peoples’ research, in both climate science and economics.
...
Yikes. I read Weitzman’s paper, and have corresponded with him on the subject — and it’s making exactly the opposite of the point they’re implying it makes. Weitzman’s argument is that uncertainty about the extent of global warming makes the case for drastic action stronger, not weaker. And here’s what he says about the timing of action:
The conventional economic advice of spending modestly on abatement now but gradually ramping up expenditures over time is an extreme lower bound on what is reasonable rather than a best estimate of what is reasonable.

Again, we’re not even getting into substance — just the basic issue of representing correctly what other people said.


From

Weitzman in context
But you’d never get this point from the way the book quotes Weitzman, which cites his probability of utter catastrophe as if it were a reason to be skeptical of the need to act. I suspect, though I don’t know this, that the authors were just careless — they skimmed Weitzman’s paper, which is densely written, saw a number they liked, and didn’t ask what the number meant.

And that sort of carelessness is the general sense I get from the chapter.

Levitt now says that the chapter wasn’t meant to lend credibility to global warming denial — but when you open your chapter by giving major play to the false claim that scientists used to predict global cooling, you have in effect taken the denier side. The only way I can reconcile what Levitt says now with that reality is that he and Dubner didn’t do their homework — not only that they didn’t check out the global cooling stuff, the stuff about solar panels, and all the other errors people have been pointing out, but that they didn’t even look into the debate sufficiently to realize what company they were placing themselves in.

And that’s not acceptable. This is a serious issue. We’re not talking about the ethics of sumo wrestling here; we’re talking, quite possibly, about the fate of civilization. It’s not a place to play snarky, contrarian games.

From

Superfreakingmeta
One good aspect of the controversy, though, has been some broader analysis of what it all means. I liked three recent comments in particular.

Joshua Gans identifies in Dubner and Levitt an odd inconsistency that I’ve identified more broadly: those who go on and on about how people respond to incentives when they’re making a pro-free-market argument suddenly seem to lose all faith in the power of incentives when the goal is to induce more environmentally friendly behavior:

But come on. Isn’t the whole point of the Freakonomics project that prices work and behaviour changes in response to incentives? Everywhere else, a few pennies will cause massive consumption changes while when it comes to a carbon price, it is all too hard.

Ryan Avent makes a general point about people who dismiss cap-and-trade as too hard, then promote something else that only seems easier because you haven’t thought it through. I agree with him about the carbon tax issue; and while I hadn’t thought about applying the same principle to geoengineering, he’s completely right. Having somebody — who? The United States? The United Nations? The Coalition of the Willing? — pump sulfur into the atmosphere through an 18-mile tube, or cut off sunlight with a giant orbital mirror, would either (a) require many years of hard negotiations or (b) quite possibly set off World War III. If it’s (a), why is that so much easier than a global agreement on emissions? (Which, as Brad points out, really would only have to involve four big players.)

Finally, Andrew Gelman poses a question:

The interesting question to me is why is it that “pissing off liberals” is delightfully transgressive and oh-so-fun, whereas “pissing off conservatives” is boring and earnest?

I have a theory here, although it may not be the whole story: it’s about careerism. Annoying conservatives is dangerous: they take names, hold grudges, and all too often find ways to take people who annoy them down. As a result, the Kewl Kids, as Digby calls them, tread very carefully when people on the right are concerned — and they snub anyone who breaks the unwritten rule and mocks those who must not be offended.

Annoying liberals, on the other hand, feels transgressive but has historically been safe. The rules may be changing (as Dubner and Levitt are in the process of finding out), but it’s been that way for a long time.

The “tell”, I’d suggest, is that once you get beyond those for whom the decision about whom to laugh at is a career move, people don’t, in fact, seem to find mocking liberals funnier than mocking conservatives. Jon Stewart and Stephen Colbert are barreling along, while right-wing attempts to produce counterpart shows have bombed.

Anyway, say this for Dubner and Levitt — they’ve provoked an interesting discussion, although probably not the one they hoped for.



From

Elizabeth Kolbert can’t say that, can she?
But mainly, I’m envious. [Elizabeth] Kolbert builds the essay around an extended metaphor involving, um, equine effluvia that I’m pretty sure wouldn’t be allowed under Times style. On the whole, the requirement that Times writers show appropriate dignity is good for everyone; still, sometimes I’m wistful.

Oh, and the reference in the title of this post is to the much-missed Molly Ivins.

Tuesday, April 22, 2014

What do grades measure?

[I wrote this in the middle of the big SAT thread and I thought I had posted it weeks ago but it appears that I never got around to it. So better late than never...]

As discussed before, many of the calls for getting rid of the SAT use the argument that high school grades are a better indicator of college success, so we don't need the SAT. There's a modeling fallacy here (also as previously discussed), but putting that aside, the suggestion that we should rely almost entirely on grades as a measure of academic accomplishment (not to be confused with measures of character and personal achievement) raises the question of what exactly grades measure. Put another way, what factors do we expect to be highly correlated with grade-point average?

First off, let's think about where grades come from. In most classes grades come almost entirely from tests, homework, in-class activities and writing assignments. In some cases there is an unavoidable subjective element in the evaluation of this work. With this in mind, think about what attributes and personality traits would correlate with higher performance.

Various forms of intelligence would doubtless factor in. This is an extraordinarily complex topic, but, in general, it's safe to say that school tends to be easier if you're smarter.

The ability to memorize would possibly be an even bigger factor in many (perhaps most) courses. Closely related to this attribute and in some cases indistinguishable from it by many metrics would be the tolerance for the act of rote memorization. Lots of people with excellent memories find the act of sitting and going over the same facts again and again extremely unpleasant. Put another way, this is one of the many areas where hard work can compensate for a lack of aptitude.

This second attribute overlaps with the next major related categories: discipline, patience and focus. A great deal of academic success depends on the ability to spend large amounts of time going over material that is neither interesting nor challenging. (This can lead to the paradoxical but not uncommon result of high aptitude leading to boredom leading to poor performance in the area of that aptitude.) I'd argue these factors are often the dominant drivers of GPA.

Because of the unavoidable element of subjectivity, the halo effect and likability can also improve grade point averages.

Between the level of material covered and the need to fashion lessons and tests to serve large numbers of students, grades often tend to favor conventional thinkers over more original ones. As students progress through college, the emphasis tends to shift to more divergent learning but at least in high school, the student who thinks differently will often be penalized.

Personal stability and home life can also be a major factor, particularly in areas like homework and other out-of-class assignments.

Finally, there is the support network: quality of instruction; availability of tutors and homework assistance; libraries and learning centers; computers with good reliable Internet access; family members who have both the time and the ability to help explain assignments.

From an analytic standpoint, it would be nice if we had separate metrics for each of these aspects. As it is, we really can't distinguish between the student with an exceptionally good grasp of the material and a student who worked hard or who had a lot of help.
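As a toy illustration of that identifiability problem (the traits, weights and scores below are all invented; this is not an empirical model of grading), very different student profiles can land on exactly the same number:

# Pretend GPA is a weighted blend of latent factors (weights are made up).
weights = {"mastery": 0.5, "diligence": 0.3, "support": 0.2}

def toy_gpa(profile):
    """Weighted average of trait scores, each on a 0-4 scale."""
    return sum(weights[k] * profile[k] for k in weights)

# A student with a deep grasp of the material but little patience for homework...
strong_grasp = {"mastery": 4.0, "diligence": 2.0, "support": 2.0}

# ...and one who grinds and has plenty of help, but a weaker grasp.
hard_worker = {"mastery": 2.4, "diligence": 3.5, "support": 3.75}

print(toy_gpa(strong_grasp))  # 3.0
print(toy_gpa(hard_worker))   # 3.0 -- same GPA, very different students

The single number simply can't tell us which mix of factors produced it.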

This is not a call for reforming our entire grading system. Though there is certainly room for improvement, it is far from the most pressing matter we face and, more importantly, badly thought-out changes (and badly thought-out has been the reform norm lately) can do far more harm than good.

What we do need to do with this or any other ranking system is try to understand its drivers and limitations and to take steps to minimize the damage caused by mistakes (because mistakes will happen).

Monday, April 21, 2014

What Nate Silver's critics are actually saying

Regarding the ongoing 538 discussion, it appears that we may be talking across each other in this case (from a previously mentioned comment by Kaiser Fung):

"The level of rigor that Krugman and others demand requires years, perhaps decades, of research to write one piece; meanwhile, the other critique is the content is not timely. Think about the full-time journalists he has hired - there isn't a way to pay them enough to do the kind of pieces that are being imagined. As we all know, data collection, cleaning and analysis take a huge amount of time. It may be months of work to get one article out."

Other than Krugman, I'm not sure exactly whom Kaiser was referring to in that first group, but I assume, since it was a comment on my post, that I'm in there somewhere (and given my other comments, it's certainly not in the timely group). The trouble is, as far as I can tell, I haven't said anything like this, and Krugman has actually said the opposite.
Similarly, climate science has been developed by many careful researchers who are every bit as good at data analysis as Silver, and know the physics too, so ignoring them and hiring a known irresponsible skeptic to cover the field is a very good way to discredit your enterprise. Economists work hard on the data; on the whole you’re going to do better by tracking their research than by trying to roll your own, and you should be very wary if your analysis runs counter to what a lot of professionals say.
In other words, when reporting on a field outside of their expertise, 538's writers should forgo all that original "data collection, cleaning and analysis," and instead report on serious research being done by experts in the field (and it's worth noting that when Krugman talks about listening to experts earlier in the post, he links to the Monkey Cage).

So this won't look like cherry-picking, I'll be as transparent and inclusive as possible. As far as I can tell, Krugman wrote four posts relevant to this discussion. Here are the name and date of each along with quotes and a summary:

Sergeant Friday Was Not A Fox
 MARCH 18, 2014, 7:55 AM
What worries me, based on what we’ve seen so far — which isn’t much, but shouldn’t the site have debuted with a bang? — is that it looks as if the Silverites have misunderstood their mission.

Nate’s manifesto proclaims his intention to be a fox, who knows many things, rather than a hedgehog, who knows just one big thing; i.e., a pundit who repeats the same assertions in every column. I’m fine with that.

But you can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming.)
We could go back and forth about how it applies in this case, but every serious STEM blogger I know of holds to the "hard thinking" standard. To do any less is to sink to the level of "Numbers in the News" infographics. Still more important (for me at least) is the part about implicit assumptions. The problem is particularly worrisome when experts jump fields, which leads neatly into the next post.


Further Thoughts on Hedgehogs and Foxes
 MARCH 18, 2014, 4:15 PM
Now, about FiveThirtyEight: I hope that Nate Silver understands what it actually means to be a fox. The fox, according to Archilocus, knows many things. But he does know these things — he doesn’t approach each topic as a blank slate, or imagine that there are general-purpose data-analysis tools that absolve him from any need to understand the particular subject he’s tackling. Even the most basic question — where are the data I need? — often takes a fair bit of expertise; I know my way around macro data and some (but not all) trade data, but I turn to real experts for guidance on health data, labor market data, and more.

What would be really bad is if this turns into a Freakonomics-type exercise, all contrarianism without any appreciation for the importance of actual expertise. And Michael Mann reminds me that Nate’s book already had some disturbing tendencies in that direction.
As before, we can discuss the merits of the Freakonomics school of scientific writing (at the  risk of oxymoron, I am consistently against constant contrarianism) and argue about the applicability of these charges against 538 (though in this case, Krugman is careful to phrase these as concerns), but this passage in no way matches what Krugman is supposed to have said.


Tarnished Silver
 MARCH 23, 2014, 10:48 AM
But I’d argue that many of the critics are getting the problem wrong. It’s not the reliance on data; numbers can be good, and can even be revelatory. But data never tell a story on their own. They need to be viewed through the lens of some kind of model, and it’s very important to do your best to get a good model. And that usually means turning to experts in whatever field you’re addressing.

Unfortunately, Silver seems to have taken the wrong lesson from his election-forecasting success. In that case, he pitted his statistical approach against campaign-narrative pundits, who turned out to know approximately nothing. What he seems to have concluded is that there are no experts anywhere, that a smart data analyst can and should ignore all that.

I've seen others make this Politico-fallacy argument (i.e. Silver's experience dealing with the idiots who had been doing sports and election prognostication has left him with a skewed view of the world). There's probably some truth there but I think it's an oversimplification.

Data as Slogan, Data as Substance
 MARCH 26, 2014, 1:00 PM
Noah Smith has the definitive piece on what’s wrong, so far, with the new FiveThirtyEight. For all the big talk about data-driven analysis, what it actually delivers is sloppy and casual opining with a bit of data used, as the old saying goes, the way a drunkard uses a lamppost — for support, not illumination.

In sum, this so-called “data-driven” website is significantly less data-driven (and less sophisticated) than Business Insider or Bloomberg View or The Atlantic. It consists nearly entirely of hedgehoggy posts supporting simplistic theories with sparse data and zero statistical analysis, making no quantitative predictions whatsoever. It has no relationship whatsoever to the sophisticated analysis of rich data sets for which Nate Silver himself has become famous. 
The problem with the new FiveThirtyEight is not one of data vs. theory. It is one of “data” the buzzword vs. data the actual thing.

This is perhaps the closest we get to the alleged demands for Silver to deliver more sophisticated analysis, but it falls far short of the "months of work to get one article out" that Krugman was supposed to have asked for (the very fact that Business Insider or Bloomberg View or The Atlantic are able to do it shows that it is doable) and, more importantly, it came not from Krugman but from the pleasant and well-liked Smith.

To summarize Krugman's position: data should be viewed in context as part of an argument or analysis. Part of that context should be the mainstream research being done in an area, and when the writer is not an expert in that field, he or she should seek one out. On a related note, pieces that assert that the experts have missed the obvious (Freakonomics-style contrarianism) should be checked carefully, as should implicit assumptions.

I am broadly in agreement with Krugman on these points (particularly with regard to Freakonomics-style journalism), though I would add a few more concerns that go along with some long-running threads here at the blog. The first involves scale. We should limit criticisms to choices, not circumstances, and in most enterprises some of the most important choices involve size and scope.

I believe Silver may have fallen into the closely related traps of the growth fetish and the Devil's Candy (the latter being the ratcheting effect where meeting certain scale targets requires changes which in turn require even larger scale targets). Something similar but probably more damaging occurred when he expanded the scope. As long as he was primarily writing or editing politics and sports stories (areas where he has extraordinary expertise), it was much easier for him to maintain a high level of quality control.

As far as I can tell, all of the real low points of the new 538 have occurred outside of these specialties (I know that Benjamin Morris' analysis of NBA steals caught a lot of flak but, while flawed, it struck me as a reasonable effort). The most embarrassing has been the hiring of Roger Pielke Jr., whose prebutted* climate change piece has done more than anything else to damage the brand that Silver worked so hard for so many years to build.

My second big concern (which is somewhat more in line with Krugman) is with bungee-jumping analysts: experts (usually economists, often physicists, though Pielke shows that political scientists can also get into the act) who think that, because they have occasionally used some similar statistical methods, they are fully qualified in fields where they have no background or experience. Emily Oster's work on fetal alcohol syndrome and the notorious Freakonomics drunk-driving analysis are apt examples.

Obviously, we can go back and forth about these criticisms, both on a general level (for example, is there such a thing as Freakonomics-style contrarianism and, if so, is it bad?) and a specific one (has 538 really been moving in the direction suggested by Smith, Krugman and me?). A good, vigorous discussion of these points would be tremendously helpful, but any productive counterargument has got to start by countering actual arguments.

* From the article linked to above:
But just as Pielke’s article has been written before, so too it has been criticized before. Dr. Kevin E. Trenberth, a distinguished senior climate scientist at the National Center for Atmospheric Research, has criticized Pielke’s data for its simplistic nature. Simply showing that an increase in damage has corresponded to an increase in wealth ignores the fact that communities are now more prepared than ever for extreme storms, Trenberth wrote at the time.


Note: Somehow my attempt to schedule this for a future date turned into a publish-now command, so the first dozen or so people got to see a few extra typos.

Friday, April 18, 2014

Good post on Vox about issues with ordinal variables

This was a very good article tackling the issue of trying to assign an ordinal score to a multi-dimensional variable.  Mark has been saying this for years now, but it is good to see statements like this coming out of more mainstream groups:
The problem with ordinal rankings — and the more variables, the more problems here — is that it implies meaningful differences between one job and the next one that is one ranking below it. You can definitively say that one job pays more than another, but is it true that clinical social worker is better than nail technician is better than middle school teacher, as US News' rankings imply? And even if somehow that were empirically provable, what's the practical application of this knowledge? Should the middle school teacher go be a social worker?
Ranking lists can occasionally serve useful functions, but it is good to see more discussion of the limitations of these measures.  Now who is brave enough to do this with post-secondary education?
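To make the weighting problem concrete, here's a minimal sketch using the three jobs from the quote above with invented attribute scores (none of this reflects the US News methodology; it's purely illustrative). Collapsing several dimensions into one ordinal ranking means choosing weights, and the ordering flips when the weights change.

# Three jobs scored (0-10) on two of many possible dimensions -- scores invented.
jobs = {
    "clinical social worker": {"pay": 4, "flexibility": 7},
    "nail technician":        {"pay": 3, "flexibility": 9},
    "middle school teacher":  {"pay": 5, "flexibility": 5},
}

def rank(weights):
    """Rank jobs by a weighted sum of their attribute scores."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return sorted(jobs, key=lambda j: score(jobs[j]), reverse=True)

# Weight pay heavily and the teacher comes out on top...
print(rank({"pay": 0.8, "flexibility": 0.2}))

# ...weight flexibility heavily and the nail technician does.
print(rank({"pay": 0.2, "flexibility": 0.8}))

Neither ordering is wrong; they just answer different questions, which is the problem with presenting any single ranking as the answer.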

Thursday, April 17, 2014

Gauss, the fox who decided to be a hedgehog

As mentioned before, I'm not entirely comfortable with the fox/hedgehog spectrum -- this isn't a concept that reduces readily to a scalar -- but as long as we're here...

One of the minor revelations of the recent discussion of the new 538 was that Andrew Gelman had posted on the subject of foxes and hedgehogs way back in 2005:
This got me thinking about statisticians. I think we’re almost all foxes! The leading statisticians over the years all seem to have worked on lots of problems. Even when they have, hedgehog-like, developed systematic ideas over the years, these have been developed in a series of applications. It seems to be part of the modern ethos of statistics, that the expected path to discovery is through the dirt of applications.

I wonder if the profusion of foxes is related to statistics’s position, compared to, say, physics, as a less “mature” science. In physics and mathematics, important problems can be easy to formulate but (a) extremely difficult to solve and (b) difficult to understand the current research on the problem. It takes a hedgehog-like focus just to get close enough to the research frontier that you can consider trying to solve open problems. In contrast, in statistics, very little background is needed, not just to formulate open problems but also to acquire many of the tools needed to study them. I’m thinking here of problems such as how to include large numbers of interactions in a model. Much of the progress made by statisticians and computer scientists on this problem has been made in the context of particular applications.

Going through some great names of the past:

Laplace: possibly hedgehog-like in developing probability theory but I think of him as foxlike in working on various social-statistics applications such as surveys, that gave him the motivation needed to develop practical Bayesian methods.

Gauss: least-squares is a great achievement, but developed as a particular mathematical tool to solve some measurement error problems. In the context of his career, his statistical work is foxlike.

Galton: could be called a “hedgehog” for his obsession with regression, but I think of him as a fox with all his little examples.

Fisher: fox. Developed methods as needed. Developed theory as appropriate (or often inappropriate).

Pearson: the family of distributions smells like a hedgehog, but what’s left of it, including chi-squared tests, looks like fox tracks.

Neyman: perhaps wanted to be a hedgehog but ultimately a fox, in that he made contributions to different problems of estimation and testing. I’d say the same of Wald and the other mid-century theorists: they might have wanted to be hedgehogs but there was no “theory of relativity” out there for them to discover, so they seem like foxes to me.
I can't really argue with the list as framed here. Gelman seems to be using a slightly different definition than the standard know-about-many-things/know-about-few-things one -- for him, foxes are the ones who come up with advances in response to different applications in different fields. Still, there is a certain irony in describing Gauss as one who was "interested in everything, and move[d] easily from one problem to another." In terms of ability, this is undeniably true, but in another sense you could argue that few have ever sacrificed more to specialize.

Perhaps the ultimate fox was Leibniz who, in addition to that whole calculus thing...
made major contributions to physics and technology, and anticipated notions that surfaced much later in philosophy, probability theory, biology, medicine, geology, psychology, linguistics, and computer science. He wrote works on philosophy, politics, law, ethics, theology, history, and philology. Leibniz's contributions to this vast array of subjects were scattered in various learned journals, in tens of thousands of letters, and in unpublished manuscripts. He wrote in several languages, but primarily in Latin, French, and German. There is no complete gathering of the writings of Leibniz.
Gauss very probably could have given him a run for his money because (and this is the amazing part) he was roughly as gifted with language as he was with math. He appears to have been conversant in well over a dozen. Gauss was almost twenty (and, among other things, already had least squares under his belt) before he finally decided he should leave philology as a hobby and focus on the sciences. I can't find a reference for this but I seem to recall that he explicitly said he didn't want to repeat Leibniz's mistake and allow himself to be distracted by tackling too many subjects.

The standard response here is to chuckle and say it looks like he made the right choice, but I'm not so sure. People who actually know what they're talking about might disagree, but I wonder how long it would have taken to fill the hole that would have been left if Gauss had diverted a few years of serious thinking from mathematics to linguistics. Not to say that any of his mathematical work was unimportant, but, with the notable exception of number theory where he still casts a long and very distinct shadow, wouldn't most of the things we call Gaussian have still arrived, albeit a few years later (and at the same time in those cases where researchers came up with the same ideas later but published them first)?

By comparison, think about what someone like Gauss might have done with five or ten years of serious linguistic research. The results could have disappointed but just think of the potential.

I'm way out of my depth here, so I'm just hoping to raise some points for discussion (and not say anything stupid in the process). There is, however, one point I would like to make. Hedgehog/fox conversations get complicated quickly and can shift radically when you change the way you frame a question. As a mathematician and a physicist, Gauss was certainly a fox, but when you look at the constraints he put on himself -- choosing not to do any work in any area where he had such incredible aptitude -- he certainly did his best to be a hedgehog.

Monday, April 14, 2014

Zombie Alert

Dean Starkman had an interesting piece in the New Republic on the financial crisis. At least, it held my interest until I came across one of those things that annoy the hell out of me.
This attitude also has a literary pedigree. Cultural theorizing about our inherent weakness goes back to the Bible (see Genesis 2:4-3:24, Adam / Eve), but it was Scottish journalist Charles Mackay who most famously dissected the specific phenomenon of contagious folly in his 1841 classic, Extraordinary Popular Delusions and the Madness of Crowds. Mackay’s work chronicled episodes of mass hysteria—witch hunts, the crusades, alchemy. But most famously, Mackay gave us the parable of the Dutch tulip mania of 1636 to 1637, when flower bulbs briefly became one of the world’s most expensive commodities. A wry and witty stylist, Mackay mixes anecdotes—like one about a sailor who mistook a priceless tulip bulb for an onion to go with his herring breakfast—with mordant observations about human nature. “In reading the history of nations, we find that, like individuals, they have their whims and their peculiarities; their seasons of excitement and recklessness, when they care not what they do,” Mackay’s preface to the 1852 edition begins. “We find that whole communities suddenly fix their minds upon one object, and go mad in its pursuit.”

Mackay has attracted plenty of support from academics over the decades, particularly scholars of social psychology, and in the years since the crash, his work has been much cited as a master theory of what went wrong. People are greedy. What can you do?

There’s just one problem: The accounts that undergird Mackay’s thesis might be wrong. As Andrew Odlyzko, a University of Minnesota mathematician who studies financial panics, puts it, Madness “enjoys extraordinarily high renown in the financial industry and among the press and the public. It also has an extraordinarily low reputation among historians.” Peter Garber, a Brown economist, found in a 1990 paper that the most intense speculation in the Dutch tulip market of the era involved only the rarest bulbs, which had been infected by a certain virus that produced particularly intricate patterns in the flower. After that, the market behaved pretty much the way the market for rare bulbs always behaves. Prices for newly cultivated bulbs were high, then fell over time. In fact, the average decline in prices for the rarest bulbs in the five years after the tulip market crashed was, at most, 32 percent. “Large, but hardly the stuff that legends are made of,” Garber writes.
There is at least one major problem with Starkman's "just one problem." The accounts that undergird Mackay's thesis about market bubbles are only tangentially related to tulipmania. Extraordinary Popular Delusions does discuss a couple of market bubbles in some detail, but those are both land bubbles. The first concerned France's Mississippi Company; the second focused on the South Sea Company in Britain. After spending about seventy pages on these crises, Mackay closes with a brief and relatively light seven pages on the market for tulip bulbs.

Not only does tulipmania play a relatively trivial role in Mackay's discussion of markets; speculative bubbles play a secondary role in his short discussion of the tulip market. The main focus here is on collector psychology and the tendency to highly value the rare and fragile simply because it is rare and fragile. Here's the opening of the chapter:
The tulip,--so named, it is said, from a Turkish word, signifying a turban,--was introduced into western Europe about the middle of the sixteenth century. Conrad Gesner, who claims the merit of having brought it into repute,--little dreaming of the commotion it was shortly afterwards to make in the world,--says that he first saw it in the year 1559, in a garden at Augsburg, belonging to the learned Counsellor Herwart, a man very famous in his day for his collection of rare exotics.

The bulbs were sent to this gentleman by a friend at Constantinople, where the flower had long been a favourite. In the course of ten or eleven years after this period, tulips were much sought after by the wealthy, especially in Holland and Germany. Rich people at Amsterdam sent for the bulbs direct to Constantinople, and paid the most extravagant prices for them. The first roots planted in England were brought from Vienna in 1600. Until the year 1634 the tulip annually increased in reputation, until it was deemed a proof of bad taste in any man of fortune to be without a collection of them. Many learned men, including Pompeius de Angelis and the celebrated Lipsius of Leyden, the author of the treatise "De Constantia," were passionately fond of tulips. The rage for possessing them soon caught the middle classes of society, and merchants and shopkeepers, even of moderate means, began to vie with each other in the rarity of these flowers and the preposterous prices they paid for them. A trader at Harlaem was known to pay one-half of his fortune for a single root, not with the design of selling it again at a profit, but to keep in his own conservatory for the admiration of his acquaintance.

One would suppose that there must have been some great virtue in this flower to have made it so valuable in the eyes of so prudent a people as the Dutch; but it has neither the beauty nor the perfume of the rose--hardly the beauty of the "sweet, sweet-pea;" neither is it as enduring as either. Cowley, it is true, is loud in its praise. He says--

   "The tulip next appeared, all over gay,
    But wanton, full of pride, and full of play;
    The world can't shew a dye but here has place;
    Nay, by new mixtures, she can change her face;
    Purple and gold are both beneath her care,
    The richest needlework she loves to wear;
    Her only study is to please the eye,
    And to outshine the rest in finery."

This, though not very poetical, is the description of a poet. Beckmann, in his _History of Inventions_, paints it with more fidelity, and in prose more pleasing than Cowley's poetry. He says, "There are few plants which acquire, through accident, weakness, or disease, so many variegations as the tulip. When uncultivated, and in its natural state, it is almost of one colour, has large leaves, and an extraordinarily long stem. When it has been weakened by cultivation, it becomes more agreeable in the eyes of the florist. The petals are then paler, smaller, and more diversified in hue; and the leaves acquire a softer green colour. Thus this masterpiece of culture, the more beautiful it turns, grows so much the weaker, so that, with the greatest skill and most careful attention, it can scarcely be transplanted, or even kept alive."

Many persons grow insensibly attached to that which gives them a great deal of trouble, as a mother often loves her sick and ever-ailing child better than her more healthy offspring. Upon the same principle we must account for the unmerited encomia lavished upon these fragile blossoms.

This is followed by a discussion of the extreme run-up of prices usually referred to as tulipmania, in which speculation plays a prominent but perhaps not central part (it's worth noting that even Garber says there was at least a brief period of intense speculation that extended to common bulb prices). Still, Mackay keeps coming back to the desire to possess these flowers rather than to resell them for a profit. To drive home the point, he closes with a paragraph on the continued high prices bulbs can fetch. "In England, in our day, strange as it may appear, a tulip will produce more money than an oak. If one could be found, _rara in terris_, and black as the black swan of Juvenal, its price would equal that of a dozen acres of standing corn."

Starkman seems to feel that Mackay's moralizing about bubbles is a form of blaming the victim. Perhaps he's right, but in order to argue the point he'd have to talk about Mackay's accounts of the Mississippi Scheme or the South Sea Bubble. Instead, we get the almost obligatory tulip reference. It was bad enough when writers were overusing this trivial and not particularly relevant case as an example of a speculative bubble; it's even worse when they use it to deny bubbles' existence.

Tulipmania is another one of those rhetorical zombies we need to kill off for good.

P.S. Though not directly related to the main point of the post, it's worth noting that, while Starkman seems to be accusing Mackay of seeing bubbles that weren't there, one of the two scholars he quotes, Andrew Odlyzko, actually accuses Mackay of choosing not to see bubbles that were there:
Charles Mackay's book "Extraordinary Popular Delusions and the Madness of Crowds" enjoys extraordinarily high renown in the financial industry and among the press and the public. It also has an extraordinarily low reputation among historians.

This paper argues that Mackay's sins of commission were dwarfed by his sins of omission. He lived through several giant investment manias in Britain, yet he did not discuss them in his books. An investigation of Mackay's newspaper writings shows that he was one of the most ardent cheerleaders for the Railway Mania, the greatest and most destructive of these episodes of extreme investor exuberance.

Mackay's story provides another example of a renowned expert on bubbles who decides that "this time is different." His moves through a sequence of delusions help explain the length and damage of the Railway Mania. He was a free market and technology enthusiast, and faced many issues that are important today, such as government ownership or regulation, interconnection, standardization, structural separation, and analogs to net neutrality. A crushing national debt and high unemployment in an economy pulling out of a deep depression (and in perceived danger of falling into another one) were very important in shaping attitudes towards railway expansion. The analogies and contrasts between Mackay's time and ours are instructive.

Saturday, April 12, 2014

Weekend blogging -- getting VORPal

Ken Levine's blog is one of the go-to references for those interested in the business, history and art of television. As you can see from this bio, he's ludicrously overqualified to write on the subject.
Named one of the BEST 25 BLOGS OF 2011 by TIME Magazine. Ken Levine is an Emmy winning writer/director/producer/major league baseball announcer. In a career that has spanned over 30 years Ken has worked on MASH, CHEERS, FRASIER, THE SIMPSONS, WINGS, EVERYBODY LOVES RAYMOND, BECKER, DHARMA & GREG, and has co-created his own series including ALMOST PERFECT starring Nancy Travis. He and his partner wrote the feature VOLUNTEERS. Ken has also been the radio/TV play-by-play voice of the Baltimore Orioles, Seattle Mariners, San Diego Padres. and has hosted Dodger Talk on the Dodger Radio Network.
(And by 'worked on,' he usually means 'played a pivotal role in the making of.') That last part of the resume means that Levine also has strong and generally well-thought-out opinions on baseball, particularly when it comes to what it takes to turn games into good broadcasting.
Statistics have always been a big part of baseball. And a major crutch for announcers who have no imagination and nothing else to fill time with. Now with Sabermetrics and more detailed categories like VORP, DRS, FIP, EQA, WHIP and WAR number crunching has been taken to a whole new level. Not that these new stats aren’t informative and useful, but there is an avalanche of them. Certainly way more than the average baseball fan can process or wants to process.

And now the Houston Astros have mandated that these analytics be a prerequisite to their broadcasts. I feel especially sorry for their longtime TV announcer, Bill Brown. He’s a terrific play-by-play man. But now saddled with this emphasis on modern-day stats and a bad team, this was the rating for the Astros’ telecast last Monday against the Los Angeles Angels: 0.0. Let me repeat that number. 0.0. And this isn’t the end of the season when the team is mathematically eliminated. It’s their first homestand.  How is that even possible?  (And it wasn't the first time.)

Yeah, WHIP and WAR really save the day.
...
Statistics are fine in key game situations. Especially if the games have import. Playoff games, for example. Ninth innings.  Pennant races.   They can enhance a big moment.  But breaking down a batter’s average against a certain pitcher when he’s had only six at bats against him and it’s the second inning of a game in mid April – who gives a shit?

Why cater your broadcast to the diehard fans? A) There are not that many of them. B) They’ll listen no matter what you do. C) You chase away casual fans. Women (50.8% of the American population), in particular, tend not to care about Wins Above Replacements. 
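Since this is nominally a statistics blog, it's worth a quick aside on two statistical points lurking in Levine's complaint: some of these scary-sounding stats are trivially simple, and a six-at-bat split really does tell you almost nothing. Here's a minimal sketch in Python (all numbers made up for illustration) using the standard definition of WHIP -- walks plus hits per inning pitched -- and a quick simulation of that six-at-bat situation.

    import random

    def whip(hits, walks, innings_pitched):
        # Walks plus hits per inning pitched -- one of the simpler "new" stats.
        return (hits + walks) / innings_pitched

    # An ordinary starting pitcher's season line (illustrative numbers).
    print(round(whip(hits=210, walks=60, innings_pitched=200.0), 2))  # 1.35

    # Now the six-at-bat split: simulate a true .300 hitter facing the same
    # pitcher six times and see how wildly the "average against" bounces around.
    random.seed(1)
    trials = 10000
    splits = [sum(random.random() < 0.300 for _ in range(6)) / 6 for _ in range(trials)]
    print(min(splits), max(splits))              # spans 0-for-6 up to 5- or 6-for-6
    print(sum(s == 0 for s in splits) / trials)  # roughly 12% of .300 hitters go 0-for-6

In other words, the announcer quoting that second-inning split in mid-April is mostly reporting noise, which is more or less Levine's point.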

Thursday, April 10, 2014

538 and Vox

Kaiser Fung made a comment in this thread:
Like Andrew, I also have been thinking about this, and I come out on the side of Nate. Individually, the critique stands but taken together, they don't call for any coherent vision of how his critics would run an operation such as his. The level of rigor that Krugman and others demand requires years, perhaps decades, of research to write one piece; meanwhile, the other critique is the content is not timely. Think about the full-time journalists he has hired - there isn't a way to pay them enough to do the kind of pieces that are being imagined. As we all know, data collection, cleaning and analysis take a huge amount of time. It may be months of work to get one article out. Further, I'd like to judge them relative to competitors rather than in some kind of abstract universe. Compared to the Freakonomics blog, for example, 538 has a much better orientation. Compare to Huffington Post - when did HP have any real data journalism? Compare to Buzzfeed, don't even want to talk about it.
Now, this is Joseph and not Mark. My view was that you simply cannot judge a publication until it has had six months or so to let things settle. I suspect a lot of the criticism was driven by the climate change article -- and it is interesting to see that this is where people's passions run highest.

Other columnists, like Emily Oster, are much more subtle cases.  I was very dismissive of Emily after her first foray into public health.  Her second has seen a lot of criticism as well, but what is different is that the current round is based on careful weighing of evidence and very subtle issues of interpretation (and this was only for a single, small piece of a much larger work).  She is getting a lot better. 

And that is part of why I am optimistic about Nate Silver. He is doing something really hard, and it remains to be seen whether the criticism slowly improves matters.

In a lot of ways, the other new information-based news source (Vox) has the exact opposite problem. They spent a huge amount of time making some really good pieces (like this one) and grabbing some of the people I used to read elsewhere (even obscure ones like this). But it will be interesting to see if they can keep up the kick-off level of quality over time.

So I guess the really good news is that we are spoiled for choice with new, information-rich media start-ups. It's hard to see how this is a bad thing.

Wednesday, April 9, 2014

The Hedgehog who thought he was a fox -- a cautionary tale

The growing chorus of Nate Silver fans critical of (or at least perplexed by) the new Five Thirty Eight has drawn a great deal of media coverage, mainly for the wrong reasons. Conservatives have painted it as a case of liberals turning on one of their own. Pundits have tried to use the recent critiques of Silver to undercut his earlier, completely unrelated critiques of them. (I'm debating whether or not to write a post on Dylan Byers' laughable misreading of Krugman's position. On one hand, it's bad enough to warrant a post. On the other hand, I'm busy, Charles Pierce already did a good job with it, and I'm pretty sure that most people already know what Byers is.)

There has been some good work on the subject. Jonathan Chait does a sharp analysis of Krugman's and Silver's personalities and how they shaped the conflict (best line: "Somewhere, David Brooks is reading Silver’s argument that Paul Krugman refuses to attack his colleagues and laughing bitterly."), but other issues that are (for me) more interesting have gotten less coverage than they merit: the culture of statistics, the often depressing career paths promising thinkers take these days,* and the dangers of a bad analogy.

It sometimes seems that there's a convention that once a debate has been framed, that framework must be respected, no matter how badly it holds up. Case in point: the fox and the hedgehog. Here's how Silver puts it:
Our logo depicts a fox (we call him Fox No. 92) as an allusion to a phrase originally attributed to the Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.
This is a doubly flawed analogy. Expertise is a spiky, complicated thing that doesn't lend itself to scalar measures, let alone binary ones. Any attempt to assign people positions on the fox/hedgehog spectrum will be problematic at best, with orderings shifting radically as weighting schemes change (the toy sketch below makes this concrete). If we do, however, decide to view the world through this framework, we immediately come to an even bigger objection to Silver's argument:

Nate Silver is a hedgehog.

There is nothing pejorative about this classification. Silver has done brilliant work. It's just that almost all of Silver's best work has been done using a small but powerful set of analytic tools to address thorny but structurally similar problems in sports and politics. In terms of methods, Silver is a specialist; in terms of data, he's a micro-specialist. Silver has an enormous body of knowledge about working with player stats or polling data, but most of that knowledge is completely field specific.

There's nothing wrong with this kind of specialization -- it's absolutely necessary for the kind of results Silver produced -- but it can cause problems when researchers move out of their areas of expertise and fail to adjust for the change. In other words, the trouble starts when hedgehogs think they're foxes.
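To go back to the weighting-scheme point, here's the promised toy sketch in Python, with entirely made-up scores for three hypothetical analysts. Score the same people on a few plausible "breadth" dimensions, change how the dimensions are weighted, and the fox-to-hedgehog ordering reverses.

    # Made-up scores on three breadth-related dimensions (purely illustrative).
    analysts = {
        "A": {"fields_worked_in": 0.9, "methods_used": 0.2, "defers_to_specialists": 0.4},
        "B": {"fields_worked_in": 0.3, "methods_used": 0.9, "defers_to_specialists": 0.5},
        "C": {"fields_worked_in": 0.5, "methods_used": 0.5, "defers_to_specialists": 0.9},
    }

    def foxiness(scores, weights):
        # One arbitrary way of collapsing a spiky profile into a single number.
        return sum(weights[k] * scores[k] for k in weights)

    weighting_1 = {"fields_worked_in": 0.6, "methods_used": 0.2, "defers_to_specialists": 0.2}
    weighting_2 = {"fields_worked_in": 0.2, "methods_used": 0.2, "defers_to_specialists": 0.6}

    for weights in (weighting_1, weighting_2):
        ranking = sorted(analysts, key=lambda name: foxiness(analysts[name], weights), reverse=True)
        print(ranking)  # prints ['A', 'C', 'B'], then ['C', 'B', 'A']

Same people, same numbers, completely different ordering -- which is the sense in which any placement on the spectrum is problematic at best.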

Being a fox means living with the constant fear that you've just done something stupid that will be immediately obvious to anyone knowledgeable in the field. Ideally that fear leads to a heightened feel for danger levels. Most experienced foxes have developed an instinct for when to seek out a hedgehog. As a corollary, a good fox is always (and I do mean ALWAYS)  more willing to ask a stupid question than to make a stupid statement.

For a case study of what can go wrong when experts leave their area of expertise and don't adjust their caution levels, you don't have to look any farther than Silver's attempt to cover the climate change debate. Michael E. Mann assesses the damage:
And so I was rather crestfallen earlier this summer when I finally got a peek at a review copy of The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. It's not that Nate revealed himself to be a climate change denier; He accepts that human-caused climate change is real, and that it represents a challenge and potential threat. But he falls victim to a fallacy that has become all too common among those who view the issue through the prism of economics rather than science. Nate conflates problems of prediction in the realm of human behavior -- where there are no fundamental governing 'laws' and any "predictions" are potentially laden with subjective and untestable assumptions -- with problems such as climate change, which are governed by laws of physics, like the greenhouse effect, that are true whether or not you choose to believe them.
...
Unlike Levitt, Nate did talk to the scientists (I know. I'm one of them!). But he didn't listen quite as carefully as he should have. When it came to areas like climate change well outside his own expertise, he to some extent fell into the same "one trick pony" trap that was the downfall of Levitt (and arguably others like Malcolm Gladwell in The Tipping Point). That is, he repeatedly invokes the alluring, but fundamentally unsound, principle that simple ideas about forecasting and prediction from one field, like economics, can readily be appropriated and applied to completely different fields, without a solid grounding in the principles, assumptions, and methods of those fields. It just doesn't work that way (though Nate, to his credit, does at least allude to that in his discussion of Armstrong's evaluation of climate forecasts).
I'm singling out Silver here not because he's a bad statistician but because he's a very good one who fell into the increasingly common traps of believing that the world outside his specialty is simpler and that, if you understand the math, you automatically understand the problem. Each field is complex and, like Tolstoy's families, complex in its own way. If you want to have something useful to say in an unfamiliar area of research, knowing the statistics may be necessary, but it is far from sufficient.

* On a related note you can find my thoughts on Five Thirty Eight's business model here.