Monday, January 31, 2011

Paul Krugman has no shame

No honorable man would use this title, even if the topic is core inflation. Even Shakespeare would have said this one goes too far.

Student Loans

I came across this article by following one of the commenters in the discussion section of Andrew Gelman's blog (it was the same discussion that Mark was recommending). Protoscholar is discussing financing graduate school and makes a very good point:

The last option, to be avoided if at all possible, is student loans. The U.S. government makes it incredibly easy to take out student loans to pay for school, but these things will haunt you for decades to come. Remember that they are not GIVING you money, they are LOANING it to you. You have to pay it back someday. If a starting assistant professor makes $50,000, it isn’t worthwhile to get the degree and graduate with twice that in loans.

Remember; loans cannot be gotten rid of with bankruptcy and have the potential to ruin your credit, your chance of buying a home, your ability to rent a place and a lot more. Figure out the cost/benefit of taking out loans. What is your starting salary likely to be? Are you stuck working for a university or can you go into private industry for a few years and make a pile of money that can be used to pay the loans off? Is your field notoriously underpaid? Think about these things before accepting the loans or plan on slaving to pay them off for a long, long time to come.


I wish more students would think about issues like this. Even with assistantships and so forth, it is difficult to complete graduate school without acquiring some debt (as life can bring up all sorts of surprises that cost money). Plus, it can be hard to live like a hermit for many years, especially if the school you are going to is in a large city (with the consequent expenses).

But I see a lot of graduate students finish with a surprising amount of debt; even if the degree is very marketable, this can be a dangerous way to start off in life.
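Protoscholar's cost/benefit suggestion is easy to make concrete. Here's a minimal back-of-the-envelope sketch using the standard amortization formula; the $100,000 balance is the "twice your starting salary" scenario from the quote, and the 6.8% figure is, if memory serves, the unsubsidized Stafford rate, so treat both as illustrative assumptions:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula for a fixed-rate loan."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Illustrative assumptions: $100,000 in loans (twice a $50,000 starting
# salary) at 6.8% -- check your actual rate -- over a standard 10-year term.
payment = monthly_payment(100_000, 0.068, 10)
print(f"Monthly payment: ${payment:,.2f}")                            # about $1,150
print(f"Total repaid:    ${payment * 120:,.2f}")                      # about $138,000
print(f"Share of $50,000 gross salary: {payment * 12 / 50_000:.1%}")  # about 28%
```

Losing more than a quarter of a gross starting salary to loan payments for a decade is exactly the kind of number worth computing before signing.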

The curious case of Dr. Glaeser

Edward L. Glaeser is a smart guy. Though only in his mid-forties, Glaeser has already made a number of important contributions to economics. He has been a fixture in the Boston Globe and has recently joined the line-up of the New York Times' Economix blog (slogan: "explaining the science of everyday life"). All of which makes this recent column on population shifts all the more inexplicable.

The article is classic "ice cream causes murder" analysis. In order to get the desired conclusion, you have to ignore numerous confounding factors and discard a number of alternative hypotheses that better explain the data, and even then you have to be selective with your metrics and graphs (or at least the labeling) to maintain even the appearance of credibility.

Joseph has already discussed this at some length, but just to review: percent change can be tricky to work with, particularly when dealing with state populations, which can vary by well over an order of magnitude. Add to this the matter of population density -- certainly a concern when talking about home prices but notably absent from Prof. Glaeser's post.

Let's take Glaeser's comparison of California and Texas. If we look at percent population change as he does, Texas does much better but Texas has more land and fewer people. If we look at absolute change, the results are much closer and if we take density into account with something like change by area, California may actually come out ahead. (Glaeser's use of California is also interesting in other unintended ways, but more on that later.)
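A quick sketch shows how much the choice of metric matters. The 2000/2010 populations and land areas below are approximate figures from the census tables, quoted from memory, so double-check them before citing:

```python
# Approximate 2000 and 2010 census populations and land areas (sq. mi.);
# round-number illustrations, not authoritative figures.
states = {
    "California": {"pop_2000": 33_871_648, "pop_2010": 37_253_956, "area": 155_959},
    "Texas":      {"pop_2000": 20_851_820, "pop_2010": 25_145_561, "area": 261_232},
}

for name, s in states.items():
    added = s["pop_2010"] - s["pop_2000"]
    print(f"{name}: {added / s['pop_2000']:.1%} growth, "
          f"{added:,} new residents, "
          f"{added / s['area']:.1f} added per square mile")
```

Texas wins handily on percent change (roughly 21% to 10%), the absolute numbers are much closer, and on new residents per square mile California comes out ahead.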

If you look at this table from Wikipedia, the role of population density becomes harder to ignore and Glaeser's case becomes harder to buy. Consider these two paragraphs:

"More generally, population isn’t moving to high-income areas. The four fastest-growth states were Nevada, Arizona, Utah and Idaho (in order of growth), all of which have earnings below the national average.

"Our richest states, Connecticut, New Jersey and Massachusetts, grew by 4.9, 4.5 and 3.1 percent, respectively, far below the national average. People are not following the money."

Could available land be playing a role here? Probably. Nevada, Utah and Idaho are among the ten least densely populated states in the union. Arizona is in the next ten. By comparison, Connecticut, New Jersey and Massachusetts make up three of the four most densely populated.
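As a rough check, here's the rank correlation between growth and density for the nine states discussed in this post. The growth rates for the four fast growers and all of the densities are approximate census figures quoted from memory, so verify before relying on them; the point is the sign and size of the correlation, not the third decimal:

```python
# (growth rate, people per square mile) -- approximate 2010 census figures,
# included for illustration only.
states = {
    "Nevada":        (0.351,   24.6),
    "Arizona":       (0.246,   56.3),
    "Utah":          (0.238,   33.6),
    "Idaho":         (0.211,   19.0),
    "California":    (0.100,  239.1),
    "Oklahoma":      (0.087,   54.7),
    "Connecticut":   (0.049,  738.1),
    "New Jersey":    (0.045, 1195.5),
    "Massachusetts": (0.031,  839.4),
}

def ranks(xs):
    """Rank of each value within its list (0 = smallest)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0] * len(xs)
    for rank, i in enumerate(order):
        out[i] = rank
    return out

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

growth, density = zip(*states.values())
# Spearman correlation = Pearson correlation of the ranks
print(f"rank correlation: {pearson(ranks(growth), ranks(density)):+.2f}")  # about -0.8
```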

Glaeser's response to this point (in a related post) doesn't really help his case:

"Why is housing supply so generous in Georgia and Texas? It isn’t land. Harris County, Tex., which surrounds Houston, has a higher population density than Westchester County, N.Y."

To people with even a passing familiarity with these two areas, the fact that Harris County is more dense than Westchester County is hardly surprising. What is surprising is that someone would say that Harris 'surrounds Houston.' For most purposes, Harris IS Greater Houston. The city does extend into two other counties, but Harris contains all but a fraction of the metropolitan area's population.

In other words, the county that includes Houston is more dense than a suburb of New York.

But perhaps the most damning rebuttal comes from Glaeser's prime example:

"My interpretation of Red State growth is that Republican states have grown more quickly because building is easier in those states, primarily because of housing regulations. Republican states are less prone to restrict construction than places like California and Massachusetts, and as a result, high-quality housing is much cheaper.

"There is a strange irony in this: more conservative places do a much better job in providing affordable housing for ordinary Americans than progressive states that are believed to care about affordable housing.

"Progressive states, of course, have other objectives beyond affordable housing, and some involve blocking building. California environmentalists have been fighting construction for more than 40 years, and regulations in Massachusetts are barely less intrusive."

According to Glaeser's hypothesis, California should be scraping the bottom, particularly given that, as mentioned before, the metric being used tends to understate the growth of highly populous states. And keep in mind that he previously used Oklahoma as one of his examples of Red State growth. Oklahoma's growth rate was 8.7%. California's was 10%. That puts California slightly above the national average. (You'll notice that, unlike the other states discussed, California's actual growth rate does not appear in the article, nor is it labeled in any of the graphs.) California also beat a number of McCain states in the region of Oklahoma and, being a good ol' Arkansas boy myself, I can tell you that if a lack of building codes and environmental regulations were driving immigration, the whole area would be packed.

[To get a good picture of what's going on here, take a few minutes to look at this helpful interactive map. Pay close attention to the red states in the center of the country like Missouri.]

Population density alone does a good job explaining population shift patterns (considerably better than Glaeser's hypothesis). Add in the growth of the Hispanic population (something any competent demographic analysis should include) and it does an excellent job. And when you lay on top of that the expected impact of Katrina (how much of Texas' growth was diverted from the anemic Louisiana?) and the Nevada real estate bubble, the map we see looks almost exactly like the map we would expect.

If you don't want to use population density and the growth of the Hispanic population, how about the graying of America? Traditional retirement migration patterns look a lot like what we're seeing here and I'll bet we can come up with a few more explanations that outperform Glaeser's hypothesis.

George Polya once observed* that, when given a theory, scientists and mathematicians tended to look for cases that contradicted the theory while almost everyone else looked for cases that confirmed it (others had made similar points before he did, but I'm a Polya fan). Glaeser's approach here falls into the 'everybody else' category. He went looking for confirmation and he found it, but his theory maintains the appearance of validity only as long as no one looks for contradictory evidence.

Perhaps Economix needs a new slogan.



*Quoting from memory so I may have to revise this later.

When bad things happen to good technologies

Roger Ebert has a follow-up to the letter he published from Walter Murch on the various reasons why 3D remains a bad idea (previously discussed here). This time the subject is Maxivision48, a technology that, according to Ebert, "produced a picture four times as good as conventional film."

It's an interesting counterpoint, an innovation that has been getting great reviews from independent experts for a dozen years but can't seem to get any attention from the industry despite what seems to be a sound business case.

Sunday, January 30, 2011

Better late than never...

Via Andrew Gelman, here's an excellent post that made many of the same points that we've been making about lottery-based analyses here at OE (and beat us to the punch by several months).

Mini-colloquium: Homework

One of the cool things about Andrew Gelman's blog is the quality of discussions you'll see in his comment section. The observations are almost always well-informed and the tone strikes a nice balance between spirited and civil.

Prof. Gelman's recent post on homework (which responded to this post we did earlier on OE) prompted one of these excellent discussions. Joseph has already built a post around one particularly interesting comment, but all of them are worth reading.

Hotel pricing enigma

(that may not be that enigmatic)

Has anyone else noticed that the higher the price of the hotel, the more likely it is to charge for internet access? I do quite a bit of travelling on my own decidedly thin dime so I usually stay in one of the bargain chains but occasionally I find myself in a nice, corporate-booked hotel. Free internet is almost universal in the cheap places but it's likely to run you $10 a night in a place with valet parking and a fitness center.

I assume this is another case of prices being driven by expense accounts, but does anyone else have an alternative explanation?

Saturday, January 29, 2011

Educational Lotteries

Hoisted from the comments on Andrew Gelman's website, Steve Sailer claims:

In my one experience with a charter school lottery, the charter school made it quite difficult to enter the lottery; and to find out if your kid was selected, you had to go there and ask them. And, it appeared, it wasn't a random lottery at all. My son was known to the founders of the school as a star student, so when I showed up and nervously asked if his name had been picked, I was told, Don't worry about it, of course he's in.


This is a very good point. There have been some excellent comments on the use of charter school lotteries as a gold standard, and I think there are persuasive reasons to be concerned, even if the lotteries are fair. However, it would be in the self-interest of charter schools to accept a few "star students" outright rather than leave their selection to chance. Insofar as this happens at all, we would no longer have randomization (and thus would be unable to use these lotteries to estimate causal effects, even if the other concerns were not present).

So it seems increasingly hard to treat these lotteries as a valid instrumental variable.
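Sailer's anecdote translates directly into a toy simulation. In the sketch below (every parameter is invented for illustration), the charter school has zero true effect, but quietly admitting known "star students" outside the lottery is enough to make the lottery-based comparison look like a solid positive effect:

```python
import random

random.seed(1)
N = 50_000
TRUE_EFFECT = 0.0  # in this toy world, the charter school adds nothing

def lottery_estimate(admit_rule):
    """Difference in mean outcomes between admitted and rejected students."""
    admitted, rejected = [], []
    for _ in range(N):
        ability = random.gauss(0, 1)               # latent student ability
        wins = admit_rule(ability)
        score = ability + TRUE_EFFECT * wins + random.gauss(0, 1)
        (admitted if wins else rejected).append(score)
    return sum(admitted) / len(admitted) - sum(rejected) / len(rejected)

fair = lambda ability: random.random() < 0.5         # a genuine coin flip
rigged = lambda ability: ability > 1.5 or random.random() < 0.5
# rigged: known "star students" skip the lottery entirely

print(f"fair lottery:   {lottery_estimate(fair):+.3f}")    # close to zero
print(f"rigged lottery: {lottery_estimate(rigged):+.3f}")  # spuriously positive
```

The fair lottery recovers an estimate near zero; the rigged one doesn't, because admission is now correlated with ability.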

Weekend Gaming -- The Blue and the Gray

[If you missed it, check out last week's game as well.]

The famous game designer,* Sid Sackson, had over eighteen thousand games in his personal collection, so making his short list was quite an accomplishment, particularly for a game that almost nobody has ever heard of.

On this alone, the Blue and the Gray would be worth a look, but the game also has a number of other points to recommend it: it only takes about three minutes to learn (you can find a complete set of rules here); it is, as far as I know, unique; it raises a number of interesting and unusual strategic questions; and for the educators out there, its Turn-of-the-Century** origins provide some opportunities for teaching across the curriculum. My only major complaint is that it requires a dedicated board, but making your own shouldn't take more than a few minutes.

The object of the game is to be the first to get your general to the center by moving along the designated path while using your soldiers to block your opponent's progress. Since soldiers can capture each other, the game has two offensive options (capturing and advancing) compared to one defensive option (blocking). (Something I learned from developing my own game was that the more the designer can shift the focus to offense, the better and faster the game will play.)

I don't know of any attempt to do a serious analysis of the Blue and the Gray. Might be fun to look into. If someone out there finds anything interesting, make sure to let us know.
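For anyone tempted to try, the search machinery itself is easy to sketch. Below is a bare-bones memoized negamax; the stand-in rules are a trivial Nim variant, because encoding the actual Blue and the Gray board, path and capture rules is the real work (and since the real game allows repeated positions, you would also need loop detection):

```python
from functools import lru_cache

# Stand-in rules: a trivial Nim variant. To analyze the Blue and the
# Gray, replace moves()/play() with an encoding of its board, the
# general's path, and the capture/advance/block options.
def moves(piles):
    return [(i, k) for i, p in enumerate(piles) for k in range(1, p + 1)]

def play(piles, move):
    i, k = move
    return tuple(p - k if j == i else p for j, p in enumerate(piles))

@lru_cache(maxsize=None)
def negamax(piles):
    """Value of the position (+1 win, -1 loss) for the player to move."""
    if not any(piles):
        return -1                     # no moves left: the mover has lost
    return max(-negamax(play(piles, m)) for m in moves(piles))

print(negamax((1, 2, 3)))  # -1: the first player loses this toy position
```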


* Yes, I did just use the phrase, 'famous game designer.'

** I'm going off memory here about the age of the game. You should probably double check before building a lesson plan around this. (see update)

UPDATE:

Via the good people at the University of Maryland, here's the original patent from 1903.

Thursday, January 27, 2011

A new post by James Kwak, an old rant by me

James Kwak has an excellent post on rational maximizing, but while the whole thing is worth reading, this passage jumped out at me:
I say this is interesting because, on the one hand, I accept the random walk studies (and I personally believe I have no ability to predict where any security price is going tomorrow), but on the other hand I think that any idea that markets have fundamental levels is flawed. For example, housing prices are still falling. Some people try to predict how far they will fall by looking at the Case-Shiller Index and figuring out where the long-term trend line will be. But how do you look at a chart and figure out what the right value is? What if there has been something different about the market over the last one hundred years from the market today? It’s really a fool’s errand.
Kwak's objection reminded me of a similar problem I had with the book, "Lifecycle Investing," by Ian Ayres and Barry Nalebuff, one that finally made my head explode:
No one can possibly know what's going on here! We can get some smart people making good guesses about long term stock performance, but these guesses are based on data from a century's worth of secular upheavals. A list that includes the Great Depression, two world wars, a post-war reconstruction, the cold war, China becoming a major player, boomers entering the market, boomers leaving the market and huge changes in regulation, technology and business practices.

What's happening now never happened before. What happened before never happened before. What happened before what happened before never happened before. We have no precedents. People are recommending forty-year investment strategies using models based on data from markets that haven't gone twenty years without a major secular change.
I have great respect for economists, but, more so than any other field I can think of, they are shooting at moving targets and sometimes I get a bit nervous about their aim.
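Kwak's "fool's errand" point is easy to demonstrate. The sketch below (all parameters made up) generates pure random walks, so by construction there is no fundamental level, then fits a trend line to the first eighty steps and extrapolates twenty more. The implied "right value" swings wildly from run to run:

```python
import random

def trend_extrapolation(seed, n_fit=80, horizon=20):
    """Fit a least-squares trend to the first n_fit steps of a pure
    random walk, then extrapolate it horizon steps ahead."""
    random.seed(seed)
    walk, level = [], 0.0
    for _ in range(n_fit + horizon):
        level += random.gauss(0, 1)        # no fundamentals, just noise
        walk.append(level)
    xs = range(n_fit)
    mx = sum(xs) / n_fit
    my = sum(walk[:n_fit]) / n_fit
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, walk)) /
             sum((x - mx) ** 2 for x in xs))
    predicted = my + slope * (n_fit + horizon - 1 - mx)
    return predicted, walk[-1]

for seed in range(5):
    pred, actual = trend_extrapolation(seed)
    print(f"run {seed}: trend says {pred:+7.2f}, walk actually ended at {actual:+7.2f}")
```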

One more exchange on Mankiw's assumptions

More from the ongoing debate.

Here's David (pulled from a longer comment):
I think where we disagree (assuming that we do disagree) is on where the burden of proof should lie. As an economist, and based on my reading of the theoretical and empirical literatures, the burden is on the individual who claims there are important plateaus and such. This requires showing empirically that they exist, and not in a general sense, but on the relevant margin of choice for those individuals. My general sense is that most economists would agree with this placing of the burden of proof, and your suggestion of the consensus of various economists is consistent with my impression as well. In other words, to assume that there are important plateaus on the margin requires empirical justification, and substantial justification because its very difficult to understand labor markets if we deviate generally even moderately from this productivity/wages relationship. So while you agree that “…if pundits' arguments are sufficiently robust or their assumptions are obviously true, they can do what Mankiw does.” I’d say that the consensus to me amongst economists supports the arguments and broader type of assumptions that I discussed previously. I suppose that’s an empirical question, for which I have not yet looked for data.

David,

It's easy to get lost in the weeds here, so I'll try to get a few specific points out of the way, then address the bigger issue of the way we treat assumptions in the economic debate.

First, to question a model's robustness, it is sufficient to show that deviating from an assumption would cause the model to fail. There is no need to show that a particular deviation (such as the possible plateaus I suggested) actually occurs, only that, if it did, problems would follow. The world is full of perfectly good models that are not robust. As long as the real world lines up closely enough with the model's assumptions, the lack of robustness is not an issue.

Robustness is, however, an issue when we go outside the range of our data, and, given these unique times, every policy proposal does. At this point the burden falls on the proposer to be explicit with assumptions and make some kind of case that they are being met.

We also need to be careful to distinguish between individual and aggregate relationships. We know that raises and promotions occur at discrete points and bonuses are frequently capped. That means that, for many workers, the relationship between wages and productivity can't be linear. It is, however, possible that, when aggregated, that relationship is linear (or at least close enough for our purposes). The problem here is that a proposal that assumes individual-level linearity can sound a lot like one that assumes aggregate linearity. Once again, we need more caution and clarity than we've been seeing.
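The individual/aggregate distinction is easy to see in a toy simulation (all numbers invented). Below, every worker's pay is a pure step function of productivity, with raises only at discrete grade boundaries, yet because those boundaries sit at different points for different workers, average pay tracks productivity almost exactly linearly:

```python
import random

random.seed(0)

def pay(productivity, boundary_offset):
    """Individual pay: a step function with a raise at each grade
    boundary; boundaries sit at different points for different workers."""
    return 40_000 + 10_000 * int(productivity + boundary_offset)

offsets = [random.random() for _ in range(10_000)]  # staggered boundaries

for p in (1.0, 1.25, 1.5, 1.75, 2.0):
    avg = sum(pay(p, o) for o in offsets) / len(offsets)
    print(f"productivity {p:.2f}: average pay ${avg:,.0f}")  # near-linear
```

So observing a clean aggregate relationship tells you little about whether the individual-level assumption holds on the relevant margin.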

All of which leads to the main point: much of the economic debate (particularly Greg Mankiw's corner of it) has been based on arguments that aren't all that robust and assumptions that aren't immediately self-evident. Many of these arguments reach conclusions that are difficult to reconcile with the historical record (such as Mankiw's prediction that a return to Clinton era taxes would have dire effects on the nation). Under these circumstances, assumptions should not be left implicit, and they certainly should not be depicted as broad and obvious when they are highly specialized and non-intuitive (Freakonomics being the best known example, with Levitt's go-to "people respond to incentives" formulation).

In other words, in this situation, I'd probably argue that the burden of proof is on Mankiw; I'd certainly insist the burden of clarity is.

Wednesday, January 26, 2011

Evaluating the evaluations

Busy morning so I don't have time to do more than provide some links and the abstract for this paper on the effectiveness of college teachers.
In primary and secondary education, measures of teacher quality are often based on contemporaneous student performance on standardized achievement tests. In the postsecondary environment, scores on student evaluations of professors are typically used to measure teaching quality. We possess unique data that allow us to measure relative student performance in mandatory follow-on classes. We compare metrics that capture these three different notions of instructional quality and present evidence that professors who excel at promoting contemporaneous student achievement teach in ways that improve their student evaluations but harm the follow-on achievement of their students in more advanced classes.
Here's the ungated version via Tyler Cowen. May not be quite the same as the published one.

Here's Andrew Gelman's reaction.

Repost -- Fitness Landscapes, Ozark Style

[I'm working on a long post that uses fitness landscapes, so I thought I'd rerun some previous posts to get the conversation going.]

I grew up with a mountain in my backyard... literally. It wasn't that big (here in California we'd call it a hill) but back in the Ozarks it was a legitimate mountain and we owned about ten acres of it. Not the most usable of land but a lovely sight.

That Ozark terrain is also a great example of a fitness landscape because, depending on which side you look at, it illustrates the two serious challenges for optimization algorithms. Think about a mountainous area at least partially carved out by streams and rivers. Now remove all of the rocks, water and vegetation, then drop a blindfolded man somewhere in the middle, lost but equipped with a walking stick and a cell phone that can get a signal if he can get to a point with a clear line of sight to a cell tower.

With the use of his walking stick, the man has a reach of about six feet, so he feels around in a circle, finds the highest point, takes two paces in that direction, then repeats the process (in other words, he performs a gradient search). He quickly reaches a high point. That's the good news; the bad news is that he hasn't reached one of the five or six peaks that rise above the terrain. Instead, he has found the top of one of the countless hills and small mountains in the area.

Realizing the futility of repeating this process, the man remembers that an engineer friend (who was more accustomed to thinking in terms of landscape minima) suggested that if they became separated he should go to the lowest point in the area so the friend would know where to look for him. The man follows his friend's advice only to run into the opposite problem. This time his process is likely to lead to his desired destination (if he crosses the bed of a stream or a creek he's pretty much set) but it's going to be a long trip (waterways have a tendency to meander).

And there you have the two great curses of the gradient searcher, numerous small local optima and long, circuitous paths. This particular combination -- multiple maxima and a single minimum associated with indirect search paths -- is typical of fluvial geomorphology and isn't something you'd generally expect to see in other areas, but the general problems of local optima and slow convergence show up all the time.
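Here's a minimal sketch of the blindfolded man's predicament, with a made-up one-dimensional landscape standing in for the Ozarks: one broad peak, lots of foothills, and a climber who probes within reach and steps uphill until no direction improves:

```python
import math

def height(x):
    """A made-up bumpy 1D 'landscape': one broad peak plus foothills."""
    return math.exp(-(x - 6) ** 2 / 8) + 0.3 * math.sin(4 * x)

def blind_climb(f, x, reach=0.1):
    """The blindfolded man's algorithm: probe within reach, step to the
    highest point found, stop when no direction is uphill."""
    while True:
        best = max((x - reach, x, x + reach), key=f)
        if best == x:
            return x
        x = best

for start in (0.0, 2.0, 4.0, 8.0, 10.0):
    top = blind_climb(height, start)
    print(f"start {start:4.1f} -> stuck at x = {top:5.2f} (height {height(top):.3f})")
```

Run it and most starting points strand the climber on a foothill well below the big peak.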

There are, fortunately, a few things we can do that might make the situation better (not what you'd call realistic things, but we aren't exactly going for verisimilitude here). We could tilt the landscape a little or slightly bend or stretch or twist it, maybe add some ridges to some patches to give it that stylish corduroy look. (In other words, we could perturb the landscape.)

Hopefully, these changes shouldn't have much effect on the size and position of the major optima,* but they could have a big effect on the search behavior, changing the likelihood of ending up on a particular optimum and the average time to optimize. That's the reason we perturb landscapes; we're hoping for something that will give us a better optimum in a reasonable time. Of course, we have no way of knowing if our bending and twisting will make things better (it could just as easily make them worse), but if we do get good results from our search of the new landscape, we should get similar results from the corresponding point on the old landscape.
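Continuing the sketch above (it reuses height() and blind_climb()), here's a perturbation in the post's corduroy spirit. Note that final heights are scored on the original landscape, since what we care about is where the corresponding point on the old landscape ends up:

```python
import math

def corduroy(x):
    """The same terrain as height(), perturbed with fine corduroy ridges."""
    return height(x) + 0.1 * math.sin(9 * x)

for start in (0.0, 2.0, 4.0, 8.0, 10.0):
    plain = blind_climb(height, start)
    ridged = blind_climb(corduroy, start)   # search the perturbed landscape
    print(f"start {start:4.1f}: plain -> x={plain:5.2f} (h={height(plain):.3f}), "
          f"ridged -> x={ridged:5.2f} (h={height(ridged):.3f})")
```

Sometimes the ridged search lands in a better basin, sometimes a worse one, which is exactly the gamble described above.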

In the next post in the series, I'll try to make the jump from mountain climbing to planning randomized trials.

* I showed this post to an engineer who strongly suggested I add two caveats here. First, we are working under the assumption that the major optima are large relative to the changes produced by the perturbation. Second, our interest in each optimum is based on its size, not whether it is global. Going back to our original example, let's say that the largest peak on our original landscape was 1,005 feet tall and the second largest was 1,000 feet even, but after perturbation their heights were reversed. If we were interested in finding the global max, this would be a big deal, but to us the difference between the two landscapes is trivial.

These assumptions will be easier to justify when we start applying these concepts in the next post in the series. For now, though, just be warned that these are big assumptions that can't be made that often.

Tuesday, January 25, 2011

They may be anecdotal...

...but as recent events drive the mental health debate, cases like this take on an added significance.

"Why 3D doesn't work and never will. Case closed."

According to Roger Ebert (who should know), Walter Murch is "the most respected film editor and sound designer in the modern cinema" and, according to Murch, 3-D movies are still a bad technology.

Here's his main objection (from an open letter to Ebert):

The biggest problem with 3D, though, is the "convergence/focus" issue. A couple of the other issues -- darkness and "smallness" -- are at least theoretically solvable. But the deeper problem is that the audience must focus their eyes at the plane of the screen -- say it is 80 feet away. This is constant no matter what.

But their eyes must converge at perhaps 10 feet away, then 60 feet, then 120 feet, and so on, depending on what the illusion is. So 3D films require us to focus at one distance and converge at another. And 600 million years of evolution* has never presented this problem before. All living things with eyes have always focused and converged at the same point.

If we look at the salt shaker on the table, close to us, we focus at six feet and our eyeballs converge (tilt in) at six feet. Imagine the base of a triangle between your eyes and the apex of the triangle resting on the thing you are looking at. But then look out the window and you focus at sixty feet and converge also at sixty feet. That imaginary triangle has now "opened up" so that your lines of sight are almost -- almost -- parallel to each other.

We can do this. 3D films would not work if we couldn't. But it is like tapping your head and rubbing your stomach at the same time, difficult. So the "CPU" of our perceptual brain has to work extra hard, which is why after 20 minutes or so many people get headaches. They are doing something that 600 million years of evolution never prepared them for. This is a deep problem, which no amount of technical tweaking can fix. Nothing will fix it short of producing true "holographic" images.
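The triangle Murch describes is simple trigonometry, and a rough sketch shows the size of the mismatch (assuming a typical interpupillary distance of about 2.5 inches; the distances are the ones from his letter):

```python
import math

IPD_FT = 2.5 / 12     # interpupillary distance: roughly 2.5 inches, in feet

def vergence_deg(distance_ft):
    """Total angle between the two lines of sight when the eyes
    converge on a point at the given distance."""
    return math.degrees(2 * math.atan((IPD_FT / 2) / distance_ft))

screen = 80           # feet; accommodation (focus) stays locked here
for apparent in (10, 60, 120):
    print(f"object 'at' {apparent:3d} ft: eyes converge {vergence_deg(apparent):.2f} deg, "
          f"while focus stays at {screen} ft ({vergence_deg(screen):.2f} deg equivalent)")
```

The nearby salt shaker demands roughly eight times the convergence angle of the screen plane the eyes must keep in focus, which is the conflict Murch is describing.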

Murch also makes important points about the editing and aesthetics of 3-D cinema, none of which are likely to make you rush out and invest your money in the technology, but that's just what the film industry has been doing.

As far as I can tell, it's been over seventy years since a customer-facing innovation (Technicolor) has revolutionized the cinema industry (distinguished here from home entertainment where the story has been entirely different). There have been customer-facing innovations but they've failed to catch on (Cinerama, color-based 3-D, Sensurround -- Imax has managed to stick around, but with less than 500 theaters after about four decades, it hasn't really been a game changer).

The innovations that actually had a major impact on the industry have been primarily focused on making films faster to produce and easier to market (multiplexes, 'opening big,' digital production, post-production and projection, even CGI).

And yet studio executives continue to dream of the next Vitaphone.






*I'm not sure about the 600 million years -- how far back does stereoscopic vision go?