Thursday, June 6, 2013

Contests big and small

This is a follow-up of sorts to the recent Project Gutenberg Project post, not in that it involves similar data or statistical techniques, but in that both are part of a loose thread on the possible (and in some cases inevitable) transformative movements in statistics and research that are generating a lot of talk but perhaps not enough discussion.

Last time I had a post on how we might want to open up big data. This time the focus is big contests: things like the X-Prizes, or proposals to offer awards for the development of new drugs. The qualifier 'big' is important; scale plays a significant role in the second half of the post and I'll get to some smaller contests then. For now, though, let's consider big contests for huge organizations.

If you put aside reputational capital (which might be a big deal to a small start-up but which wouldn't have much impact on a big multinational), in order for contests to make sense, you're left with the following two inequalities:

Existing incentives < Cost of development

but

Existing incentives + (Prize * Probability of winning) > Cost of development + Uncertainty penalty

That's the picture for the competitors and it's hard to imagine them being satisfied that often.
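To make the competitor's calculation concrete, here is a minimal sketch with hypothetical numbers (my own illustration, not figures from the post) plugging into the two inequalities above:

```python
# Hypothetical numbers ($M) to illustrate the competitor's decision.
# None of these values come from the post; they are chosen so that the
# prize just barely tips the balance.
existing_incentives = 2.0    # value captured even without the prize
cost_of_development = 5.0    # cost of mounting a serious entry
prize = 20.0                 # the purse
p_win = 0.25                 # competitor's estimated chance of winning
uncertainty_penalty = 1.0    # risk discount for an all-or-nothing payout

# First inequality: existing incentives alone don't justify development.
worth_without_prize = existing_incentives > cost_of_development

# Second inequality: the prize's expected value must tip the balance,
# net of the uncertainty penalty.
expected_payoff = existing_incentives + prize * p_win
worth_with_prize = expected_payoff > cost_of_development + uncertainty_penalty

print(worth_without_prize, worth_with_prize)  # False True
```

Note how narrow the window is: even a $20M prize only works here because the competitor believes it has a one-in-four chance of winning, which is why satisfying both inequalities at once is rare.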

What about the party offering the prize? For them, much of the benefit comes from the number of groups competing for the prize; presumably the quality of the winning entry should increase monotonically as you add competitors. Number of entries is, of course, inversely related to cost of development.
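The monotonic claim above can be made precise with a toy order-statistics model (my own illustration, not from the post): if each entry's quality is an independent Uniform(0,1) draw, the winning entry is the maximum of n draws, whose expected value is n/(n+1). It rises with every added competitor, but with sharply diminishing returns.

```python
# Toy model: winning quality = expected maximum of n i.i.d.
# Uniform(0,1) draws, which has the closed form n / (n + 1).
def expected_winning_quality(n_competitors: int) -> float:
    return n_competitors / (n_competitors + 1)

for n in (1, 5, 20, 100):
    print(n, round(expected_winning_quality(n), 3))
```

The diminishing returns matter for the prize-giver: going from 1 to 5 competitors helps far more than going from 20 to 100, even before counting the entry-suppressing effect of higher development costs.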

These contests have long had a special appeal for freshwater economists, but when you get past the open-enrollment aspect, the process looks awfully bureaucratic. The competitions are driven by artificial criteria determined by a committee. What's more, the criteria generally only have to be met once, during a relatively short time frame, rather than over the lifetime of the product. This creates a strong incentive to cheat.

If, given these factors, you were to sit down and try to come up with the worst possible product to develop through a contest, it would probably be in pharmaceuticals. Drug development has huge costs, is largely limited to a small pool of players, and is highly vulnerable to fraud (fraud which comes with substantial social costs). Despite this, drugs are one of the most frequently proposed applications of the contest model.

The various X-Prizes are fortunately not vulnerable to fraud, but they are also poor choices for the contest model: massive projects (though, with the exception of the tricorder, not necessarily that ambitious from a technological standpoint) requiring huge start-up costs that can be undertaken by only a handful of competitors (think dozens instead of hundreds).

The contest model can, however, make a great deal of sense assuming you can meet the following conditions:

A relatively large pool of potential competitors;

Relatively low cost of development;

A relatively high secondary reward/cost of development ratio (secondary rewards include reputation, access to future contracts, publications and for students, grades and thesis material);

Largely ungameable criteria;

Significant positive externalities (for example, a text mining competition might increase the competitors' programming skills and thus increase the skilled labor pool).

For example, DARPA's Grand Challenge contests met all of these conditions. Hundreds of teams competed in the various events and almost certainly produced more quality research and innovative thinking than we would have gotten spending the same money in a straight grant arrangement.

Here, however, we run into a different question of scale, not in participants but in number of projects. The Grand Challenge worked not only because it met the conditions listed above but because it took advantage of a certain slack in the system. There were enough people in engineering and computer science departments who were looking for an interesting, high-profile project and who didn't care that the expected value of the compensation didn't come close to paying for the time, facilities and equipment required to compete. How many of these competitions can we introduce before we start cannibalizing the talent pool?

When done badly, contest-based research makes little sense, but even when done well, it can't really be more than a small part of our overall research strategy.


2 comments:

  1. Are you familiar with Kaggle's predictive modeling competitions? I believe that fitting your criteria is a big part of why they work so well (both large and small ones). And I'd say we're doing much more to build the talent pool than we are cannibalizing it (due to the externalities you mentioned).

    However, we did learn that it isn't plausible for a majority of the world's data science work to get done through these competitions, which is why we started Kaggle Connect, allowing companies to consult directly with the top talent.

    1. Yes, these open modeling competitions definitely fit into the second group.
