Thursday, March 11, 2010

Propensity Score Calibration

I am on the road giving a guest lecture at UBC today. One of the topics I was going to cover in today's discussion was propensity score calibration (by the ever-brilliant Til Sturmer). But I wonder -- if you have a true random subset of the overall population -- why not just use it? Or, if, as Til assumes, the sample is too small, why not use multiple imputation? Wouldn't that be an equivalent technique that is more flexible for things like subgroup analysis?

Or is it the complexity of the imputation in data sets of the size Til worked with that was the issue? It's certainly a point to ponder.
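To make the comparison concrete, here is a minimal sketch of the imputation route I have in mind: treat the confounder that is only measured in the validation subsample as missing data in the main sample and impute it. This is my sketch of the alternative, not Til's method; the sample sizes, effect sizes, and the single extra confounder (x2) are all made up.

```python
# A rough sketch of the multiple-imputation alternative: treat the confounder
# that is only measured in the validation subsample as missing data in the
# main sample and impute it.  Everything here is simulated and hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_main, n_val, M = 5000, 500, 20

def simulate(n):
    x1 = rng.normal(size=n)                     # confounder measured everywhere
    x2 = 0.5 * x1 + rng.normal(size=n)          # confounder measured only in the validation set
    treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x1 + 0.8 * x2))))
    y = 1.0 * treat + 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
    return x1, x2, treat, y

x1_m, _, t_m, y_m = simulate(n_main)            # pretend x2 is unobserved here
x1_v, x2_v, t_v, y_v = simulate(n_val)          # small random validation subsample

# Imputation model fit in the validation subsample (note that it includes the outcome).
X_v = sm.add_constant(np.column_stack([x1_v, t_v, y_v]))
imp = sm.OLS(x2_v, X_v).fit()
sigma = np.sqrt(imp.scale)

X_m = sm.add_constant(np.column_stack([x1_m, t_m, y_m]))
ests, variances = [], []
for _ in range(M):
    # Draw coefficients to (crudely) propagate imputation-model uncertainty.
    beta = rng.multivariate_normal(imp.params, imp.cov_params())
    x2_imp = X_m @ beta + rng.normal(scale=sigma, size=n_main)
    fit = sm.OLS(y_m, sm.add_constant(np.column_stack([t_m, x1_m, x2_imp]))).fit()
    ests.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

est = np.mean(ests)
var = np.mean(variances) + (1 + 1 / M) * np.var(ests, ddof=1)   # Rubin's rules
print(f"treatment effect: {est:.2f} (SE {np.sqrt(var):.2f}); simulated truth is 1.0")
```

The appeal of this route is exactly the flexibility mentioned above: once the confounder is imputed, subgroup analyses come along for free.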

Worse than we thought -- credit card edition

For a while it looked like the one good thing about the economic downturn was that it was getting people to pay down their credit card debts. Now, according to Felix Salmon, we may have to find another silver lining:

Total credit-card debt outstanding dropped by $93 billion, or almost 10%, over the course of 2009. Is that cause for celebration, and evidence that U.S. households are finally getting their act together when it comes to deleveraging their personal finances? No. A fascinating spreadsheet from CardHub breaks that number down by looking at two variables: time, on the one hand, and charge-offs, on the other.

It turns out that while total debt outstanding dropped by $93 billion, charge-offs added up to $83 billion — which means that only 10% of the decrease in credit card debt — less than $10 billion — was due to people actually paying down their balances.

Tuesday, March 9, 2010

Perils of Convergence

This article ("Building the Better Teacher") in the New York Times Magazine is generating a lot of blog posts about education reform and talk of education reform always makes me deeply nervous. Part of the anxiety comes having spent a number of years behind the podium and having seen the disparity between the claims and the reality of previous reforms. The rest comes from being a statistician and knowing what things like convergence can do to data.

Convergent behavior violates the assumption of independent observations used in most simple analyses, but educational studies commonly, perhaps even routinely, ignore the complex ways that social norming can cause the nesting of student performance data.

In other words, educational research is often based on the idea that teenagers do not respond to peer pressure.
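A toy simulation makes the statistical point. I'm assuming here that classroom norming shows up as a shared classroom-level effect; every number below is invented.

```python
# Toy illustration: when classroom norming induces correlated outcomes,
# treating students as independent understates the standard errors.
# All of the parameters here are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_classes, n_students = 40, 25
class_id = np.repeat(np.arange(n_classes), n_students)

class_effect = rng.normal(scale=5, size=n_classes)                    # shared "norming" effect
treated = np.repeat(rng.binomial(1, 0.5, n_classes), n_students)      # class-level intervention
score = 70 + class_effect[class_id] + rng.normal(scale=10, size=class_id.size)

X = sm.add_constant(treated)
naive = sm.OLS(score, X).fit()                                        # pretends students are independent
clustered = sm.OLS(score, X).fit(cov_type="cluster", cov_kwds={"groups": class_id})

print(f"naive SE for the intervention effect:     {naive.bse[1]:.2f}")
print(f"cluster-robust SE (classrooms as groups): {clustered.bse[1]:.2f}")
```

Ignore the clustering and you will be far too confident about effects that are really just classroom-to-classroom noise.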

Since most teenagers are looking for someone else to take the lead, social norming can be extremely sensitive to small changes in initial conditions, particularly in the make-up of the group. This makes it easy for administrators to play favorites -- when a disruptive or under-performing student is reassigned from a favored to an unfavored teacher, the student lowers the average of the second class and often resets the standards of normal behavior for his or her peers.

If we were to adopt the proposed Jack Welch model (big financial incentives at the top; pink slips at the bottom), an administrator could, just by moving three or four students, arrange for one teacher to be put in line for achievement bonuses while another teacher of equal ability would be in danger of dismissal.

Worse yet, social norming can greatly magnify the bias caused by self-selection, and self-selection biases are rampant in educational research. Any kind of application process automatically removes almost all of the students who either don't want to go to school, aren't interested in academic achievement, or know that their parents won't care what they do.

If you can get a class consisting entirely of ambitious, engaged students with supportive parents, social norming is your best friend. These classes are almost (but not quite) idiot-proof, and teachers lucky enough to have these classes will see their metrics go through the roof (and their stress levels plummet -- those are fun classes to teach). If you can get an entire school filled with these students, the effect will be even stronger.

This effect is often stated in terms of the difference in performance between the charter schools and the schools the charter students were drawn from, which adds another level of bias (not to mention insult to injury).

Ethically, this raises a number of tough questions about our obligations to all students (even the difficult and at-risk) and what kind of sacrifices we can reasonably ask most students to make for a few of their peers.

Statistically, though, the situation is remarkably clear: if this effect is present in a study and is not accounted for, the results are at best questionable and at worst meaningless.

(This is the first in a series of posts about education. Later this week, I'll take a look at the errors in the influential paper on Harlem's Promise Academy.)

Efficacy versus effectiveness

One of the better examples of this distinction that I have found involves physical activity. Travis Saunders talks about the difference between a closely monitored exercise program and simply encouraging exercise-related behavior (a difference that shows up despite randomization).

This should be a warning for those of us in drug research as well; not even randomization will help if you have a lot of cross-overs over time or if users tend to alter other behaviors as a result of therapy. This isn't very plausible for some drugs with few side effects (statins) but could be really important for others where the effects can alter behavior (NSAIDs). In particular, it makes me wonder about our actual ability to use randomized experiments of pain medication for arthritis (except, possibly, in the context of comparative effectiveness).
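Here is a quick simulation of the kind of dilution I mean, with invented effect sizes: a real pharmacological benefit gets watered down by crossover and partly offset by treatment-driven behavior change, and the intention-to-treat contrast reflects only the diluted net effect.

```python
# Invented numbers: a drug with a real benefit of 1.0, 30% of controls cross over,
# 20% of the treated stop, and treated patients change other behavior in a way
# that offsets 0.3 of the benefit.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
assigned = rng.binomial(1, 0.5, n)                          # randomized assignment

crossover = rng.binomial(1, 0.3, n)                         # controls who start the drug anyway
stop = rng.binomial(1, 0.2, n)                              # treated patients who quit
on_drug = np.where(assigned == 1, 1 - stop, crossover)

behavior_offset = 0.3 * on_drug                             # therapy changes other behavior
outcome = 1.0 * on_drug - behavior_offset + rng.normal(size=n)

itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
print(f"net effect of actually taking the drug: {1.0 - 0.3:.2f}")
print(f"intention-to-treat estimate:            {itt:.2f}")   # diluted by crossover
```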

But it is worth thinking about when trying to interpret observational data. What else could you be missing?

Monday, March 8, 2010

Undead papers

Okay, so what do y'all do when a paper becomes undead? We all have work that stopped, for one reason or another, but really needs to be brought to a conclusion. Not even necessarily a happy conclusion (sometimes putting a project out of its misery is the kindest decision for all involved -- especially for the junior scientist leading the charge). And sometimes the results are just not that compelling, though the paper still deserves to be published in the journal of minor findings.

But I wonder: what is the secret to motivation under these conditions?

Sunday, March 7, 2010

"Algebra in Wonderland" -- recommended with reservations

In today's New York Times, Melanie Bayley, a doctoral candidate in English literature at Oxford, argues that Lewis Carroll's Alice in Wonderland can be interpreted as a satire of mathematics in the mid-Nineteenth Century, particularly the work of Hamilton and De Morgan.

The essay has its share of flaws: none of the analogies are slam-dunk convincing (the claim that the Queen of Hearts represents an irrational number is especially weak); the omission of pertinent works like "A Tangled Tale" and "What the Tortoise Said to Achilles" is a bit strange; and the conclusion that without math, Alice might have been more like Sylvie and Bruno would be easier to take seriously if the latter book hadn't contained significant amounts of mathematics* and intellectual satire.

Those problems aside, it's an interesting piece, a great starting point for discussing mathematics and literature and it will give you an excuse to dig out your Martin Gardner books. Besides, how often do you get to see the word 'quaternion' on the op-ed page?


* including Carroll's ingenious gravity powered train.

Friday, March 5, 2010

When is zero a good approximation?

I was commenting on Andrew Gelman's blog when a nice commenter pointed out something that I usually don't think much about: pharmacoepidemiology outcomes include both cost and efficacy.

Now, a lot of my work has been on older drugs (aspirin, warfarin, and beta blockers are my three most commonly studied drugs), so I have tended to assume that cost was essentially zero. A year's supply of aspirin for $10.00 is an attainable goal, and so I have assumed that we can neglect the cost of therapy.

But does that make sense if we are talking about a targeted chemotherapy? In such a case, we might have to weigh not just the burden of additional adverse events but the cost of the medication itself.

It's becoming appallingly clear to me that I don't have a good intuition for how to model this well. Making everything a cost and assuming a price on years of life lost is one approach, but the complexity of the pricing involved (and the tendency for relative costs to change over time) worries me about external validity.
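For reference, the standard framing is the incremental cost-effectiveness ratio. The numbers below are entirely hypothetical; the point is only that the cost term is negligible for a $10 drug and dominant for an expensive one.

```python
# Back-of-the-envelope incremental cost-effectiveness ratio (ICER).
# Every number below is hypothetical.
old_cost, old_qaly = 10.0, 0.10          # cheap generic: a year's supply for about $10
new_cost, new_qaly = 80_000.0, 0.35      # targeted agent: the cost can no longer be ignored

icer = (new_cost - old_cost) / (new_qaly - old_qaly)        # dollars per QALY gained
print(f"ICER: ${icer:,.0f} per QALY")

# When the comparator costs $10, setting its cost to zero changes the ratio by a
# rounding error; when both arms are expensive, that shortcut breaks down.
```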

I know what I will be thinking about this weekend!

Thursday, March 4, 2010

How are genetically engineered crops like AAA rated structured bonds?

Felix Salmon draws a clever analogy:

If you only grow one crop, the downside of losing it all to an outbreak is catastrophe. In rural Iowa it might mean financial ruin; in Niger, it could mean starvation.

Big agriculture companies like DuPont and Archer Daniels Midland (ADM), of course, have an answer to this problem: genetically engineered crops that are resistant to disease. But that answer is the agricultural equivalent of creating triple-A-rated mortgage bonds, fabricated precisely to prevent the problem of credit risk. It doesn’t make the problem go away: It just makes the problem rarer and much more dangerous when it does occur because no one is — or even can be — prepared for such a high-impact, low-probability event.

Valuing Pain

Readers of this blog will know that I have some concerns about the regulation of pain medications. The FDA continues to warn about the issue of liver injury when taking acetaminophen.

For a moment, let's ignore the case of people taking the drug inappropriately or for whom another medication would provide better symptom control. They exist and are relevant to policy discussions, but they distract from today's main thought.

We can measure liver damage and death (hard outcomes). We cannot easily measure pain -- what level of pain relief is worth a 1% chance of death?

So do we leave it up to individual judgment? Drugs can be confusing, and acetaminophen, precisely because it works, is included in a lot of combination preparations for good reasons. So what is the ideal balance between these two goals (preventing adverse events and relieving pain)?
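One way to see why the question is hard is a toy expected-value calculation. The utilities and risks below are invented; the answer swings entirely on the pain numbers we cannot actually measure.

```python
# Toy expected-value calculation; every utility and risk below is invented.
life_expectancy = 30.0      # remaining years
pain_utility = 0.75         # quality weight living with untreated pain
relief_utility = 0.90       # quality weight with adequate analgesia
fatal_risk = 0.01           # chance the therapy causes a fatal adverse event

untreated = life_expectancy * pain_utility
treated = (1 - fatal_risk) * life_expectancy * relief_utility
print(f"expected QALYs untreated: {untreated:.1f}, treated: {treated:.1f}")

# Break-even risk: the fatal risk at which the patient is indifferent.
# It depends entirely on the pain utilities we cannot measure well.
print(f"break-even fatal risk: {1 - pain_utility / relief_utility:.1%}")
```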

It would be so much easier if pain were easy to measure . . .

Wednesday, March 3, 2010

p-values

Another nice critique of relying on p-values. There is also a fine example in the comments of why one should double-check when things look odd. Often it is better to keep one's mouth shut and be thought a fool than to open it and remove all doubt.

Tuesday, March 2, 2010

Comparing Apples and Really Bad Toupees

DISCLAIMER: Though I have worked in some related areas like product launches, I have never done an analysis of brand value. What follows are a few thoughts about branding without any claim of special expertise or insight. If I've gotten something wrong here I would appreciate any notes or corrections.

Joseph's post reminded me of this article in the Wall Street Journal about the dispute between Donald Trump and Carl Icahn over the value of the Trump brand. Trump, not surprisingly, favors the high end:
In court Thursday, Mr. Trump boasted that his brand was recently valued by an outside appraiser at $3 billion.

In an interview Wednesday, Mr. Trump dismissed the idea that financial troubles had tarnished his casino brand. He also dismissed Mr. Icahn's claims that the Trump gaming brand was damaged, pointing to a recent filing in which Mr. Icahn made clear that he wants to assume the license to the brand. "Every building in Atlantic City is in trouble. OK? This isn't unique to Trump," he said. "Everybody wants the brand, including Carl. It's the hottest brand in the country."
While Icahn's estimate is a bit lower:
Mr. Icahn, however, believes his group also would have the right to use the Trump name under an existing licensing deal, but says the success of the casinos don't hinge on that. The main disadvantage to losing the name, he says, would be the $15 million to $20 million cost of changing the casinos' signs.
So we can probably put the value of the Trump brand somewhere in the following range:

-15,000,000 < TRUMP ≤ 3,000,000,000

Neither party here is what you'd call trustworthy, and both are clearly pulling the numbers they want out of appropriate places, but they are able to make these claims with straight faces partly because of the nature of the problem.

Assigning a value to a brand can be a tricky thing. Let's reduce this to pretty much the simplest possible case and talk about the price differential between your product and a similar house brand. If you make Clorox, we're in pretty good shape. There may be some subtle difference in quality between your product and, say, the Target store brand, but it's probably safe to ignore it and ascribe the extra dollar consumers pay for your product to the brand effect.

But what about a product like Apple Computers? There's clearly a brand effect at work, but in order to measure the price differential we have to decide what products to compare them to. If we simply look at specs, the brand effect is huge, but Apple users would be quick to argue that they were also paying for high quality, stylish design, and friendly interfaces. People certainly pay more for Macs, iPods, iPhones, and the rest, but how much of that extra money is for features and how much is for brand?
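One common way to frame the question is a hedonic regression: regress price on the specs you can measure plus a brand indicator, and read the brand coefficient as the premium. The sketch below runs that regression on simulated data; every number in it is made up.

```python
# Sketch of a hedonic regression on simulated data: price on measurable specs
# plus a brand dummy.  Every number below is made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
ram = rng.choice([4, 8, 16], n)                 # GB
storage = rng.choice([128, 256, 512], n)        # GB
brand = rng.binomial(1, 0.5, n)                 # 1 = premium brand

# Simulated prices: specs matter, and the premium brand adds $300 beyond specs.
price = 200 + 40 * ram + 0.8 * storage + 300 * brand + rng.normal(scale=50, size=n)

X = sm.add_constant(np.column_stack([ram, storage, brand]))
fit = sm.OLS(price, X).fit()
print(f"estimated brand premium: ${fit.params[3]:.0f}")

# The catch: if design quality and friendly interfaces are real but unmeasured,
# they end up inside the brand coefficient -- which is exactly the Apple objection.
```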

(full disclosure: I use a PC with a dual Vista/Ubuntu operating system. I do my programming [Python, Octave] and analysis [R] in Ubuntu and keep Vista for compatibility issues. I'm very happy with my system. If an Apple user would like equal time we'd be glad to oblige)

I suspect that more products are closer to the Apple end of this spectrum than the Clorox end, but even with things like bleach, all we have is a snapshot of a single product. To be useful, we need to estimate the long-term value of the brand. Is it a Zima (assuming Zima was briefly a valuable brand) or is it a Kellogg's Corn Flakes? And we would generally want a valuation that could cover multiple products. How do we measure the impact of a brand on products we haven't launched yet? (This last point is particularly relevant for Apple.)

The short answer is that you take smart people, give them some precedents and some guidelines, then let them make lots of educated guesses and hope they aren't gaming the system to tell you what you want to hear.

It is an extraordinarily easy system to game even with guidelines. In the case of Trump's casinos we have three resorts, each with its own brand that interacts in an unknown and unknowable way with the Trump brand. If you removed Trump's name from these buildings, how would it affect the number of people who visit or the amount they spend?

If we were talking about Holiday Inn or even Harrah's, we could do a pretty good job estimating the effect of changing the name over the door. We would still have to make some assumptions, but we would have data to back them up. With Trump, all we would have is assumptions built on assumptions. If you take these assumptions about the economy, trends in gambling and luxury spending, and the role of Trump's brand and where it's headed, and you give each one of them a small, reasonable, completely defensible nudge in the right direction, it is easy to change your estimates by one or two orders of magnitude.
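The compounding is easy to see with a toy calculation; the six factors below are invented.

```python
# Six assumptions, each given a "small, reasonable" nudge; the factors are invented.
import math

nudges = [1.5, 1.6, 1.4, 1.5, 1.6, 1.5]
print(f"combined inflation: {math.prod(nudges):.0f}x")   # roughly an order of magnitude
```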

We also have an unusual, possibly even unique, range-of-data problem. Many companies have tried to build a brand on a public persona, sometimes quite successfully. Normally a sharp business analyst would be in a good position to estimate the value of one of these brands and answer questions like "if Wayne Gretzky were to remove his name from this winter resort, what impact would it have?"

The trouble with Trump is that almost no one likes him, at least according to his Q score. Most persona-based brands are built upon people who were at some point well liked, and the Q score is one of the standard metrics analysts use when looking at those brands. Until we get some start-ups involving John Edwards and Tiger Woods, Mr. Trump may well be outside the range of our data.

Comparing apples and oranges

Comparing salaries across national borders is a tricky thing to do. I was reminded of this problem while reading a post from Female Science Professor. My experience has been limited to the US and Canada but, even there, it's hard to really contrast these places. When I worked in Montreal, I had easy access to fast public transit, most things within walking distance, and inexpensive housing, but a much lower salary. In Seattle, I have reluctantly concluded that, given my work location, a car is essential.

So how do you compare salaries?

This is actually a general problem in epidemiology. Socio-economic status is known to be an important predictor of health, but it is tricky to measure. Salary needs to be adjusted for cost of living, which is hard even when you have good location information (and in de-identified data you may very well not). Even within large urban areas, costs can vary considerably depending on location.
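The crude fix is to deflate by a cost-of-living index, as in the sketch below (the salaries and index values are hypothetical), but that still leaves out things like needing to buy a car.

```python
# Crude real-salary comparison using a cost-of-living index (base = 100).
# The salaries and index values are hypothetical.
salaries = {"Montreal": 70_000, "Seattle": 95_000}
col_index = {"Montreal": 78, "Seattle": 121}

for city, salary in salaries.items():
    adjusted = salary / (col_index[city] / 100)
    print(f"{city}: nominal {salary:,} -> index-adjusted {adjusted:,.0f}")
```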

Alternatively, there are non-financial rewards (that are status boosting) in many jobs; how do you weight these? Adam Smith noted back in the Wealth of Nations that a prestigious position was related to lower wages. How do you compare equal salaries between a store clerk and a journalist?

It's a hard problem and I really lack a great solution. But it's worth putting some real thought into!

Monday, March 1, 2010

"What bankers can learn from arc-welder manufacturers"

Felix Salmon points out the following from a book review in the Wall Street Journal:

Mr. Koller contends that layoffs deprive companies of profit-generating talent and leave the remaining employees distrustful of management—and often eager to find jobs elsewhere ahead of the next layoff round. He cites research showing that, on average, for every employee laid off from a company, five additional ones leave voluntarily within a year. He concludes that the cost of recruiting, hiring and training replacements, in most cases, far outweighs the savings that chief executives assume they're getting when they initiate wholesale firings and plant closings.

Having actually built some of the models that directly or indirectly determined hiring and layoffs, and more importantly having been the one who explained those models to the higher-ups, I very much doubt that most companies spend enough time looking at the hidden and long term costs of layoffs.
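A back-of-the-envelope version of that arithmetic, with invented figures, shows how quickly the hidden costs can swamp the visible savings.

```python
# Back-of-the-envelope layoff arithmetic; all of the figures are invented.
laid_off = 100
voluntary_leavers = 5 * laid_off           # Koller's "five leave for every one cut"
avg_salary = 60_000
replace_cost_share = 0.5                   # recruiting + training, as a share of salary

savings = laid_off * avg_salary
replacement_cost = voluntary_leavers * replace_cost_share * avg_salary
print(f"first-year payroll savings:        ${savings:,}")
print(f"cost of replacing those who leave: ${replacement_cost:,.0f}")
```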

The book is Spark, by Frank Koller. Sounds interesting.

Selection Bias with Hazard Ratios

Miguel Hernan has a recent article on the Hazards of Hazard Ratios. The thing that jumped out at me was his discussion of the "depletion of susceptibles": any intervention can look protective, eventually, if it speeds up disease in the susceptible, so that the rate of events in that population eventually drops (once all of the members of the population able to have an event have had it).
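A quick simulation shows the mechanism. The proportions and rates below are arbitrary; the exposure is genuinely harmful to the susceptible subgroup in every period, yet the hazard ratio among survivors eventually falls below one.

```python
# Discrete-time simulation of depletion of susceptibles.  The exposure is
# genuinely harmful to the susceptible subgroup in every period, but the
# hazard ratio among survivors eventually falls below 1.  Rates are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
exposed = rng.binomial(1, 0.5, n).astype(bool)
susceptible = rng.binomial(1, 0.3, n).astype(bool)      # unmeasured frailty

hazard = np.full(n, 0.005)                  # background per-period event probability
hazard[susceptible] = 0.05                  # susceptibles have events sooner...
hazard[susceptible & exposed] = 0.15        # ...and exposure triples that rate

alive = np.ones(n, dtype=bool)
for period in range(1, 31):
    events = alive & (rng.random(n) < hazard)
    rate_e = events[exposed].sum() / alive[exposed].sum()
    rate_u = events[~exposed].sum() / alive[~exposed].sum()
    if period == 1 or period % 5 == 0:
        print(f"period {period:2d}: hazard ratio among survivors {rate_e / rate_u:.2f}")
    alive &= ~events
```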

I think that this element of hazards ratios illustrates two principles:

1) it always makes sense to begin the analysis of a medication at first use or else you can miss a lot

2) In the long run, we are all dead

So the real trick seems to be more focus on good study design and being careful to formulate problems with precision. Quality study design never goes out of style!

Nate Silver debunks another polling myth

Here's the old chestnut (from Robert Moran):


In a two way race, political professionals don't even bother to look at the spread between the incumbent and the challenger, they only focus on the incumbent's support relative to 50%. Incumbents tend to get trace elements of the undecideds at the end of a campaign. Sure, there is the occasional exception, but this rule is fairly ironclad in my experience.


Here's Silver's takedown:


There are several noteworthy features of this graph:


1) It is quite common for an incumbent to be polling at under 50 percent in the early polling average; this was true, in fact, of almost half of the races (30 of the 63). An outright majority of incumbents, meanwhile, had at least one early poll in which they were at under 50 percent of the vote.


2) There are lots of races in the top left-hand quadrant of the graph: these are cases in which the incumbent polled at under 50 percent in the early polling average, but wound up with more than 50 percent of the vote in November. In fact, of the 30 races in which the incumbent had less than 50 percent of the vote in the early polls, he wound up with more than 50 percent of the vote 18 times -- a clear majority. In addition, there was one case in which an incumbent polling at under 50 percent wound up with less than 50 percent of the November vote, but won anyway after a small third-party vote was factored in. Overall, 19 of the 30 incumbents to have less than 50 percent of the vote in the early polling average in fact won their election.


3) 5 of the 15 incumbents to have under 45 percent of the vote in early polls also won their elections. These were Bob Menendez (38.9 percent), Tim Pawlenty (42.0 percent), Don Carcieri (42.3 percent), Jennifer Granholm (43.4 percent) and Arnold Schwarzenegger (44.3 percent), all in 2006.

3b) If we instead look at those cases within three points of Ted Strickland's 44 percent, when the incumbent had between 41 and 47 percent of the vote in early polls, he won on 11 of 17 occasions (65 percent of the time).


4) Almost all of the data points are above the red diagonal line, meaning that the incumbent finished with a larger share of the vote than he had in early polls. This was true on 58 of 63 occasions.


4b) On average, the incumbent added 6.4 percent to his voting total between the early polling average and the election, whereas the challenger added 4.5 percent. Looked at differently, the incumbent actually picked up the majority -- 59 percent -- of the undecided vote vis-a-vis early polls.


4c) The above trend seems quite linear; regardless of the incumbent's initial standing in the early polls, he picked up an average of 6-7 points by the election, although with a significant amount of variance.


5) The following corollary of Moran's hypothesis is almost always true: if an incumbent has 50 percent or more of the vote in early polls, he will win re-election. This was true on 32 of 33 occasions; the lone exception was George Allen in Virginia, who had 51.5 percent of the vote in early polls in 2006 but lost re-election by less than a full point (after running a terrible campaign). It appears that once a voter is willing to express a preference for an incumbent candidate to a pollster, they rarely (although not never) change their minds and vote for the challenger instead.