Monday, April 18, 2011

Cheap Beer, Paradoxical Dice, and the Unfounded Morality of Economists

Sometimes a concept can be so intuitively obvious that it actually becomes more difficult to teach and discuss. Take transitivity. We say that real numbers have the transitive property: if you have three real numbers (A, B and C) and you know A > B and B > C, then you also know that A > C.

Transitivity is almost too obvious to get your head around. In order to think about the concept, you really have to think about its opposite as well -- A > B, B > C and C > A. None too imaginatively, we call these opposite relationships intransitive or non-transitive. Non-transitive relationships are deeply counter-intuitive. We just don't expect the world to work like that. If you like butterscotch shakes better than chocolate shakes and chocolate shakes better than vanilla shakes, you expect to like butterscotch better than vanilla. If you can beat me at chess and I can beat Harry, you assume you can beat Harry. There is, of course, an element of variability here -- Harry might be having a good day or you might be in a vanilla kind of mood -- but on average we expect these relationships to hold.

The only example of a non-transitive relationship most people can think of is the game Rock, Paper, Scissors. Other games with non-transitive elements include the boomer classic Stratego, where the highest-ranked piece can only be captured by the lowest, and my own contribution, the game Kruzno, which was designed specifically to give students a chance to work with these concepts.

While these games give us a chance to play around with non-transitive relationships, they don't tell us anything about how these relationships might arise in the real world. To answer that question, it's useful to look at another game.

Here are the rules. We have three dice marked as follows:

Die A {2,2,4,4,9,9}

Die B {1,1,6,6,8,8}

Die C {3,3,5,5,7,7}

Because I'm a nice guy, I'm going to let you pick the die you want. I'll then take one of the two remaining dice. We'll roll and whoever gets the higher number wins. Which die should you pick?

The surprising answer is that no matter which one you pick I'll still have the advantage because these are non-transitive dice. A beats B five out of nine times. B beats C five out of nine times. C beats A five out of nine times. The player who chooses second can always have better odds.
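
If you want to check the arithmetic yourself, a few lines of Python enumerating all thirty-six face pairings for each matchup will confirm it. The die values are exactly the ones listed above; nothing else is assumed:

```python
from itertools import product

# The three dice from the example above; every face is equally likely.
dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def win_probability(x, y):
    """Probability that a roll of die x beats a roll of die y."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return wins / 36          # 6 faces x 6 faces = 36 equally likely pairings

for first, second in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"P({first} beats {second}) = {win_probability(first, second):.3f}")
# Each matchup prints 0.556, i.e. 5/9 -- a perfect cycle.
```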

The dice example shows that it's possible for systems using random variables to produce non-transitive relationships. Can we also get these relationships in something deterministic, like the rules of a control system or the algorithm a customer might use to decide on a product?

One way of dealing with multiple variables in a decision is to apply a threshold test to one variable while optimizing another. Here's how you might use this approach to decide between two six-packs of beer: if the price difference is a dollar or less, buy the better brand; otherwise pick the cheaper one.* For example, let's say that if beer were free you would rank beers in this order:

1. Sam Adams

2. Tecate

3. Budweiser

If these three beers cost $7.99, $6.99 and $5.99 respectively, you would pick Tecate over Bud, Sam Adams over Tecate and Bud over Sam Adams. In other words, a rock/paper/scissors relationship.
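
For the curious, here's a minimal sketch of that rule in Python. The brand ranking, prices, and one-dollar threshold are the ones from the example; the prices are kept in cents just so the comparison is exact:

```python
# Threshold rule from the example: if the price gap is within a dollar,
# buy the better brand; otherwise buy the cheaper one.
quality_rank = {"Sam Adams": 1, "Tecate": 2, "Budweiser": 3}     # 1 = best
price_cents = {"Sam Adams": 799, "Tecate": 699, "Budweiser": 599}

def pick(beer1, beer2, threshold_cents=100):
    if abs(price_cents[beer1] - price_cents[beer2]) <= threshold_cents:
        return min(beer1, beer2, key=lambda b: quality_rank[b])   # better brand
    return min(beer1, beer2, key=lambda b: price_cents[b])        # cheaper beer

print(pick("Tecate", "Budweiser"))     # Tecate
print(pick("Sam Adams", "Tecate"))     # Sam Adams
print(pick("Sam Adams", "Budweiser"))  # Budweiser -- and the cycle closes
```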

Admittedly, this example is a bit contrived but the idea of a customer having a threshold price is not outlandish, and there are precedents for the idea of a decision process where one variable is ignored as long as it stays within a certain range.

Of course, we haven't established the existence, let alone the prevalence, of these relationships in economics, but their very possibility raises some interesting questions and implications. Because transitivity is such an intuitively appealing concept, it often works its way unnoticed into the assumptions behind all sorts of arguments. If you've shown A is greater than B and B is greater than C, it's natural not to bother comparing A and C.

What's worse, as Edward Glaeser has observed, economists tend to be reductionists, and non-transitivity tends to play hell with reductionism. This makes economics particularly dependent on assumptions of transitivity. Take Glaeser's widely-cited proposal for a "moral heart of economics":

Teachers of first-year graduate courses in economic theory, like me, often begin by discussing the assumption that individuals can rank their preferred outcomes. We then propose a measure — a ranking mechanism called a utility function — that follows people’s preferences.

If there were 1,000 outcomes, an equivalent utility function could be defined by giving the most favored outcome a value of 1,000, the second best outcome a value of 999 and so forth. This “utility function” has nothing to do with happiness or self-satisfaction; it’s just a mathematical convenience for ranking people’s choices.

But then we turn to welfare, and that’s where we make our great leap.

Improvements in welfare occur when there are improvements in utility, and those occur only when an individual gets an option that wasn’t previously available. We typically prove that someone’s welfare has increased when the person has an increased set of choices.

When we make that assumption (which is hotly contested by some people, especially psychologists), we essentially assume that the fundamental objective of public policy is to increase freedom of choice.


But if these rankings can be non-transitive, then you run into all sorts of problems with the very idea of a utility function. (It would also seem to raise some interesting questions about revealed preference.) Does that actually change the moral calculus? Perhaps not, but it certainly complicates things (what exactly does it mean to improve someone's choices when you don't have a well-ordered set?). More importantly, it raises questions about the other assumptions lurking in the shadows here. What if new options affect the previous ones in some other way? For example, what if the value of new options diminishes as options accumulate?
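
To see concretely why a cycle and a utility function can't coexist, here's a small check using the pairwise choices from the hypothetical beer rule above: no way of ranking the three beers from best to worst (and therefore no utility function over them) reproduces all three choices.

```python
from itertools import permutations

# Pairwise winners produced by the hypothetical beer rule above.
pairwise_choice = {
    frozenset({"Tecate", "Budweiser"}): "Tecate",
    frozenset({"Sam Adams", "Tecate"}): "Sam Adams",
    frozenset({"Sam Adams", "Budweiser"}): "Budweiser",
}
beers = ["Sam Adams", "Tecate", "Budweiser"]

def consistent(ordering):
    """True if 'prefer whichever beer ranks higher in ordering' matches every pairwise choice."""
    rank = {b: i for i, b in enumerate(ordering)}        # 0 = most preferred
    return all(min(pair, key=rank.get) == winner
               for pair, winner in pairwise_choice.items())

# No total ordering -- and hence no utility function over these three
# options -- reproduces the cycle.
print(any(consistent(order) for order in permutations(beers)))   # False
```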

It's not difficult to argue for the assumption that additional choices bring diminishing returns. After all, the more choices you have, the less likely you are to choose the new one. This would imply that any action that takes choices from someone who has many and gives them to someone who has significantly fewer represents a net gain, since the choice is more likely to be used by the recipient. If we weight the value of a choice by the likelihood of its being used, and we further assume that giving someone money increases his or her choices, then taking money from a rich person and giving it to a poor person should produce a net gain in freedom.
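
Here's a purely illustrative back-of-the-envelope version of that argument. The 1/(n+1) weighting and the option counts are assumptions I'm making for the sake of the sketch, not anything in Glaeser's piece:

```python
# Illustrative model: assume the marginal value of an extra option to
# someone who already has n options is 1/(n+1) -- a stand-in for the
# chance that the new option is the one actually used.
def marginal_value(n_options):
    return 1 / (n_options + 1)

rich_options, poor_options = 100, 5               # hypothetical option counts

loss_to_rich = marginal_value(rich_options - 1)   # value of the rich person's 100th option
gain_to_poor = marginal_value(poor_options)       # value of the poor person's 6th option

print(f"net change in weighted freedom: {gain_to_poor - loss_to_rich:+.3f}")
# ~ +0.157: under this weighting, moving one option from rich to poor is a net gain.
```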

Does this mean Glaeser's libertarian argument is secretly socialist? Of course not. The fact that he explicitly cites utility functions suggests that he is talking about a world where orders are well defined, effects are additive, and you can understand the whole by looking at the parts. In that world his argument is perfectly valid.

But as we've just seen with our dice and our beer, we can't always trust even the most intuitively obvious assumptions to hold. What's more, our examples were incredibly simple. Each die had just three equally probable values. The purchasing algorithm used only two variables and two extremely straightforward rules.

The real world is far more complex. With more variables and more involved rules and relationships, the chances of an assumption catching us off guard only get greater.



*Some economists might object at this point that this algorithm is not rational in the strict economics sense of the word. That's true, but unless those economists are also prepared to argue that all consumers are always completely rational actors, the argument still stands.

1 comment:

  1. Well done. Economics, as it is taught today, is on a very weak mathematical basis. Look at the equilibrium problem in which economists assume a single equilibrium, that the state converges and that the process is not path dependent. It's like calculus before mathematicians figured out why series converge, except with weaker empirical backing. (Mathematicians could at least try catching the tortoise and, like Achilles, sit on its back and argue convergence from there.)
