Monday, December 31, 2012

Bad grading idea

Via Tyler Cowen:
 I will submit your papers (blind) to external referees as well as myself for assessment, an A grade will be limited to those papers, and only those papers, that are recommended for acceptance or conditional acceptance, a B grade will be assigned to those papers that receive a recommendation of revise and resubmit, and a C grade will be assigned to those papers that are rejected by the external referees and myself.
I would be quite annoyed to discover that I was putting in the hours to referee a paper only to find that I was doing a professor's job of grading it.  Furthermore, it seems that the editor is also the professor for the course.  I would be reluctant to evaluate a student or peer at my institution; the less distance, the more reluctant I would be.  The professor in question is willing to blind the papers for the external reviewers, who cannot possibly be as potentially biased as a professor grading their own students. 

I am also wondering about the standards of a journal in which revise and resubmit is a B grade.  There cannot be many A's.  I have (once) had a paper accepted without revisions, but it was definitely not the first time it had ever been sent to a journal.  The idea that a paper produced in a single-semester course (in parallel with other classes) would be of such high quality that it was accepted without revisions after less than four months of work would be incredible in Epidemiology. 

Sunday, December 30, 2012

Back when fifty years was a long time ago

I've noticed over the past year or two that people ranging from Neal Stephenson to Paul Krugman have been increasingly open about the possibility that technological progress has been under-performing lately (Tyler Cowen has also been making similar points for a while). David Graeber does perhaps the best job summing up the position (though I could do without the title).

The case that recent progress has been anemic is often backed with comparisons to the advances of the late Nineteenth and early Twentieth Centuries (for example).  There are all sorts of technological and economic metrics that show the extent of these advances but you can also get some interesting insights looking at the way pop culture portrayed these changes.

Though much has been written about pop culture attitudes toward technological change, almost all of it focuses on forward-looking attitudes (what people thought the future would be like). This is problematic since science fiction authors routinely mix the serious with the fanciful, the satiric and even the deliberately absurd. You may well get a better read by looking at how people in the middle of the Twentieth Century looked at their own recent progress.

In the middle of the century, particularly in the Forties, there was a great fascination with the Gay Nineties. It was a period in living memory and yet in many ways it seemed incredibly distant, socially, politically, economically, artistically and most of all, technologically. In 1945, much, if not most day-to-day life depended on devices and media that were either relatively new in 1890 or were yet to be invented.  Even relatively old tech like newspapers were radically different, employing advances in printing and photography and filled with Twentieth Century innovations like comic strips.

The Nineties genre was built around the audiences' self-awareness of how rapidly their world had changed and was changing. The world of these films was pleasantly alien, separated from the viewers by cataclysmic changes.

The comparison to Mad Men is useful. We have seen an uptick in interest in the world of fifty years ago but it's much smaller than the mid-Twentieth Century fascination with the Nineties and, more importantly, shows like Mad Men, Pan Am and the Playboy Club focused almost entirely on social mores. None of them had the sense of travelling to an alien place that you often get from Gay Nineties stories.

There was even a subgenre built around that idea, travelling literally or figuratively to the world of the Nineties. Literal travel could be via magic or even less convincing devices.

(technically 1961 is a little late for this discussion, but you get the idea)

Figurative travel involved going to towns that had for some reason abandoned the Twentieth Century. Here's a representative 1946 example from Golden Age artist Klaus Nordling:

There are numerous other comics examples from the Forties, including this from one of the true geniuses of the medium, Jack Cole.

More on Glaeser and Houston

From the comment section to my recent post on Edward Glaeser (from a reader who was NOT happy with my piece), here's a quote from Glaeser that does put Houston where it belongs:
If our Houston family's income is lower, however, its housing costs are much lower. In 2006, residents of Harris County, the 4-million-person area that includes Houston, told the census that the average owner-occupied housing unit was worth $126,000. Residents valued about 80% of the homes in the county at less than $200,000. The National Association of Realtors gives $150,000 as the median price of recent Houston home sales; though NAR figures don't always accurately reflect average home prices, they do capture the prices of newer, often higher-quality, housing.
Just to review, I had criticized Glaeser for this quote:
Why is housing supply so generous in Georgia and Texas? It isn’t land. Harris County, Tex., which surrounds Houston, has a higher population density than Westchester County, N.Y.
So should we write off the "surrounds" as a bad choice of words and move on to another topic? Not quite yet. The trouble is that the placement of Houston doesn't just shift; it shifts in a way that's necessary for Glaeser's argument to work. If Houston weren't in Harris then the Westchester comparison would make sense. Here are the population densities (all numbers courtesy of Wikipedia):

Harris             2,367/sq mi
Westchester    2,193/sq mi

But Harris does include Houston. None of the counties that contain parts of NYC have a density lower than Harris (not even close). Of course, the county to county comparisons are problematic, but we get the same results from the much cleaner city-to-city comparison.

Houston              3,623/sq mi
New York City   27,012.5/sq mi

We could go back and forth on the best way to slice this data, but this is a big difference to get around. That doesn't mean that population density is driving the difference in housing costs between New York and Houston, or that regulation isn't the main driver here.
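As a quick sanity check, the same arithmetic can be run in a few lines of Python (the figures are the Wikipedia numbers quoted above, not independently verified):

```python
# Population densities quoted in the post (people per square mile, via Wikipedia)
densities = {
    "Harris County": 2367,
    "Westchester County": 2193,
    "Houston": 3623,
    "New York City": 27012.5,
}

# Glaeser's county comparison: Harris is only slightly denser than Westchester
county_ratio = densities["Harris County"] / densities["Westchester County"]

# The cleaner city-to-city comparison: NYC is several times denser than Houston
city_ratio = densities["New York City"] / densities["Houston"]

print(f"Harris/Westchester: {county_ratio:.2f}x")  # 1.08x
print(f"NYC/Houston:        {city_ratio:.2f}x")    # 7.46x
```

However you slice it, the county comparison is nearly a wash while the city comparison is off by a factor of seven and change.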

But the Harris/Westchester example Glaeser used to prove his point was badly chosen, and it's worrisome that neither he, the New York Times, nor the vast majority of the blogosphere picked up on that.

Saturday, December 29, 2012

Glaeser... Glaeser... Where have I heard that name before?

Joseph's last post has got me thinking that it might be a good time for a quick Edward Glaeser retrospective.

Glaeser is, unquestionably, a smart guy with a lot of interesting ideas. Unfortunately, those ideas come with a heavy dose of confirmation bias, a bias made worse by a strong tendency to see the world through a conservative/libertarian filter, a provincial attitude toward much of the country and a less than diligent approach to data. The result is often some truly bizarre bits of punditry.

The provincialism and cavalier approach are notably on display in this piece of analysis involving Houston, a city Glaeser has written about extensively.
Why is housing supply so generous in Georgia and Texas? It isn’t land. Harris County, Tex., which surrounds Houston, has a higher population density than Westchester County, N.Y.
The trouble is Houston is IN Harris County (technically, the town does spill over into a couple of other counties -- Texas counties are on the small side -- but it's mainly Harris).

Keep in mind that Glaeser is one of the leading authorities on cities and Houston is one of his favorite examples.

[Update: Glaeser has correctly placed Houston in Harris County in the past, though the Harris/Westchester comparison still raises questions.]

Glaeser's flawed example was part of a larger argument about Red State growth: "Republican states have grown more quickly because building is easier in those states, primarily because of housing regulations. Republican states are less prone to restrict construction than places like California and Massachusetts, and as a result, high-quality housing is much cheaper."

Like so much of Glaeser, it's an interesting idea with some important implications but as presented it doesn't really fit the data.

This confirmation bias can lead to some other truly strange examples:
But there was a crucial difference between Seattle and Detroit. Unlike Ford and General Motors, Boeing employed highly educated workers. Almost since its inception, Seattle has been committed to education and has benefited from the University of Washington, which is based there. Skills are the source of Seattle’s strength.
The University of Michigan is essentially in a suburb of Detroit. UM and UW are both major schools with similar standings. Washington is better in some areas, Michigan is better in others, but overall they are remarkably close. When you add in Wayne State (another fine school), the argument that Seattle is doing better than Detroit because of the respective quality of their universities is, well, strange.

Glaeser's confirmation bias has led him to make a number of other easily refuted arguments. His predictions about the auto bailout aren't looking good. Joseph pointed out numerous problems with his statements about food stamps. Dominik Lukeš demolished his school/restaurant analogy. His claims about Spain's meltdown are simply factually wrong.

To make matters worse, Glaeser doesn't seem to show much interest in engaging his critics. As far as I know, none of these points have ever been addressed.

Friday, December 28, 2012

Disability and work

Via Brad Delong we have this quote from Eschaton:

It's an economist way to think about things, that someone being in the labor force means they're choosing the "work option." But in recession the options for some are no work and no money (otherwise known as "homelessness") or managing to qualify for disability (average monthly payment about $1100, max benefit about $2500). If you have some form of disability, you might be able to work if you have a job and employer that can accommodate you, but lose that job and you're probably going to be out of luck.

This isn't really mysterious stuff. Someone is 61, has a moderate disability, and loses his/her job. There is no work option.
This was in response to an article by Edward Glaeser puzzling over why the Social Security disability rolls are suddenly rising, including such gems as

Ultimately, the best recipe for fighting poverty is investment in human capital. This starts with improving our education system, an undertaking that should include experiments with digital learning, incentives for attracting good teachers and retooling community colleges so they provide marketable skills to less-advantaged Americans.

The steady rise in disability claims presents something of a puzzle. Medicine has improved substantially. Far fewer of us labor in dangerous industrial jobs like the ones that originally motivated disability insurance. The rate of deaths due to injuries has plummeted. Behavior that can cause disability, such as alcohol use and smoking, has declined substantially. American age-adjusted mortality rates are far lower than in the past.
Has Professor Glaeser noticed tuition costs lately?  Or looked at the consequences of defaulting on student loans?  The opportunity cost for a semi-disabled worker of going back to school, going wildly into debt and then risking having Social Security benefits garnished to repay the loans is very high. 

The real issue is unemployment.  That is the root source of declining skills, and it is reasonable that firms may be less likely to hire disabled workers in a recession.  Add in the reduced mobility caused by the housing crash and it is not at all mysterious why people might have trouble breaking back into the work force once they are delayed. 

As for reduced mortality, sometimes that goes the other direction.  Saving a person from a heart attack (who would previously have died) may result in a person with lower cardiac function and with less endurance than before.  In a full employment scenario they can fight to stay in the workforce because there is a shortage of good people.  But in a massive recession they can no longer compete and disability is actually a correct description of why they cannot compete. 

Thinking about the math of board games over at You Do the Math

Thursday, December 27, 2012

I'll be talking about the future in the future

And I will definitely be discussing this article by David Graeber and this one by Robert J. Gordon. I have quibbles with both (particularly the Gordon piece) but they are still good, thought-provoking and important. Read 'em both and we'll talk about it later.

A poverty comment

There were two articles that led to me agreeing with Matt Yglesias about refocusing on cash transfers and worrying less about how the money is spent.  One was an exposé on how the state of Georgia tries to keep people from receiving benefits.  Now it is fair enough to decide, as a state or a society, that you don't want to give these benefits.  But doesn't it make sense to make that decision transparently instead of slowly adding in extra steps?

The other was a link I found on Felix Salmon's website called "Stop subsidizing obesity".  What made this article surreal was that we did not get a discussion of ways to change agricultural subsidies to make high fructose corn syrup a less prevalent additive.  No, what we got was an attack on food stamps:
This could happen in two ways: first, remove the subsidy for sugar-sweetened beverages, since no one without a share in the profits can argue that the substance plays a constructive role in any diet. “There’s no rationale for continuing to subsidize them through SNAP benefits,” says Ludwig, “with the level of science we have linking their consumption to obesity, diabetes and heart disease.” New York City proposed a pilot program that would do precisely this back in 2011; it was rejected by the Department of Agriculture (USDA) as “too complex.”
Now, of all the targets they could go after, soda is the one I am most sympathetic to.  I have been trying (with mixed success) to radically reduce my own consumption of the substance.  But look at the sorts of ideas that come out next:

Simultaneously, make it easier to buy real food; several cities, including New York, have programs that double the value of food stamps when used for purchases at farmers markets. The next step is to similarly increase the spending power of food stamps when they’re used to buy fruits, vegetables, legumes and whole grains, not just in farmers markets but in supermarkets – indeed, everywhere people buy food.
Can we say "arbitrage opportunity"?  Maybe it would be inefficient, but look at how much more complex you are making a very basic program in order to reach a social goal.  And why are we targeting it at food stamp recipients?  If this is a worthwhile way to combat obesity, why not use a vice tax (an approach I can at least conceptually support)?
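To make the worry concrete, here's a stylized sketch; the doubling multiplier is the only feature taken from the proposal, and every other number (especially the resale discount) is invented:

```python
# Stylized illustration of the arbitrage worry (numbers hypothetical).
# Suppose a program doubles the value of benefits spent on qualifying foods.

benefits = 100.0        # face value of monthly benefits
multiplier = 2.0        # spending power on fruits, vegetables, etc.
resale_discount = 0.7   # fraction of retail value recovered by reselling

produce_purchased = benefits * multiplier               # $200 of produce
cash_from_resale = produce_purchased * resale_discount  # $140 cash

# Even after a steep discount, reselling converts $100 of restricted
# benefits into more than $100 of unrestricted cash.
print(cash_from_resale > benefits)  # True
```

The point isn't that everyone would do this, just that any large gap between face value and spending power invites someone to close it.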

So these are the types of issues that make me think maybe we should be much more basic in our approach to charity. 

The difficulties in talking about TV viewership.

In response to this post on television's surprising longevity, Andrew Gelman pointed out that ratings really have plummeted:
[T]he other day I happened to notice a feature in the newspaper giving the ratings of the top TV shows, and . . . I was stunned (although, in retrospect, maybe I shouldn't've been) by how low they were. Back when I was a kid there was a column in the newspaper giving the TV ratings every week, and I remember lots of prime-time shows getting ratings in the 20's. Nowadays a top show can have a rating of less than 5.
Undoubtedly, there has been a big drop here (as you would expect given that broadcast television used to have an effective monopoly over home entertainment), but has the drop been as big as it looks? There are a few mitigating factors, particularly if we think about total viewership for each episode (or even each minute) of a show and the economics of non-rival goods:

1. 52 weeks a year
It took years for the networks to catch on to the potential of the rerun. You'll see this credited to the practice of broadcasting live, but the timelines don't match up. Long seasons continued well into the Sixties and summer replacement shows into the Seventies. With the advent of reruns, the big three networks started selling the same shows twice, whereas before, the audience an episode drew the first time it aired was often all the viewers it would ever have. Should we be talking about the number of people who watched a particular airing, or should we consider the total number of people who saw an episode over all its airings?

2. The big three... four... five... five and a half...
Speaking of the big three, when we talk about declining ratings, we need to take into account that the network pie is now sliced more ways with the addition of Fox, CW, Ion, MyNetwork and possibly one or two I'm forgetting.

3. But if you had cable you could be watching NCIS and the Big Bang Theory
A great deal of cable programming is recycled network programming. If we count viewership as the total number of times a program is viewed (a defensible if not ideal metric), you could actually argue that the number is trending up for shows produced for and originally shown on the networks.

4. When Netflix is actually working...
Much has been made of on-line providers as a threat to the networks, but much of their business model currently relies on streaming old network shows. This adds to our total views tally. (Attempts at moving away from this recycling model are, at best, proceeding slowly.)

5.  I'm waiting for the Blu-ray
Finally, the viewership and revenue from network shows have been significantly enhanced by DVDs and Blu-rays.

I don't want to make too much of this. Network television does face real challenges, cable has become a major source of programming (including personal favorites like Justified, Burn Notice and The Closer), and web series are starting to show considerable promise. The standard twilight-of-the-networks narrative may turn out to be right this time. I'm just saying that, given the resilience of the institutions and the complexities of thinking about non-rival goods, I'd be careful about embracing any narrative too quickly.

Wednesday, December 26, 2012

The difference between fantasy and reality

There are no real surprises in this ABC News story (via Digby), nothing that common sense couldn't tell you, but given recent statements by the NRA and its allies, this is an excellent reminder of the huge gap between the action-hero fantasy and the reality of these situations.

Police officers and military personnel are selected for suitable skills and personality, trained extensively and continuously, and re-evaluated on a regular basis, and yet even they avoid these scenarios whenever possible (and occasionally end up shooting themselves or innocent bystanders when the situations are unavoidable).

While there are exceptions, the odds of a civilian with a concealed weapon actually helping are extraordinarily small.

Most of us fantasize about being able to do what an Army Ranger or a SWAT team member can do. There's nothing wrong with fantasizing or even with acting out those fantasies with cardboard targets on a shooting range.

The trouble is, as our gun culture has grown more fantasy-based, people like Wayne LaPierre have increasingly lost the ability to distinguish between real life and something they saw in a movie.

Boxing Day Brain Teasers

Here are a couple to ponder. I've got answers (as well as the inevitable pedagogical discussions) over at You Do the Math.

1. If a perfect square isn't divisible by three, then it's always one more than a multiple of three, never one less. Can you see why?

2. Given the following

A       B                 D
_______________________________________________
            C                   E          F        G

Where does the 'H' go?
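Teaser 1 can at least be checked empirically before you work out the why (the proof stays over at You Do the Math):

```python
# Check teaser 1: for n not divisible by 3, n*n mod 3 is always 1
# (one more than a multiple of three), never 2 (one less).
remainders = {(n * n) % 3 for n in range(1, 1000) if n % 3 != 0}
print(remainders)  # {1}
```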


I am quite in agreement with this sentiment:
Obviously there's a risk that some of the money will be "wasted" on booze or tobacco but in practice that looks like much less wastage than the guaranteed waste involved in a high-overhead prescriptive charity.
It seems that the costs of "targeting charity" are actually quite high.  In general, we don't like to prescribe how people spend money from other sources.  I am not sure that it really makes sense to do so in the case of poverty, either, given the surprisingly large costs required to ensure that the aid is spent precisely the way the giver intended.  Now, it is true that earned income would be even better, but the only way for a government to accomplish that would be with either monetary policy or a jobs program.

Neither of these seems to be on the table at the moment.

Tuesday, December 25, 2012

Merry Christmas from Little Nemo

And make sure to drive safely.

Sunday, December 23, 2012

As American as Wyatt Earp

As I mentioned before, today's gun culture is radically different from the one I grew up with. On a related note, the gun rights movement, while often presented as conservative (pushing back against liberal advances) or reactionary (wanting to return to the standards of the past), is actually radical (advocating a move to a state that never existed). The idea that people have an absolute right to carry a weapon anywhere, at any time and in any fashion was never the norm, not even in the period that forms the basis for so much of the personal mythology of the gun rights movement.

UCLA professor of law Adam Winkler:

Guns were obviously widespread on the frontier. Out in the untamed wilderness, you needed a gun to be safe from bandits, natives, and wildlife. In the cities and towns of the West, however, the law often prohibited people from toting their guns around. A visitor arriving in Wichita, Kansas in 1873, the heart of the Wild West era, would have seen signs declaring, "Leave Your Revolvers At Police Headquarters, and Get a Check."

A check? That's right. When you entered a frontier town, you were legally required to leave your guns at the stables on the outskirts of town or drop them off with the sheriff, who would give you a token in exchange. You checked your guns then like you'd check your overcoat today at a Boston restaurant in winter. Visitors were welcome, but their guns were not.

In my new book, Gunfight: The Battle over the Right to Bear Arms in America, there's a photograph taken in Dodge City in 1879. Everything looks exactly as you'd imagine: wide, dusty road; clapboard and brick buildings; horse ties in front of the saloon. Yet right in the middle of the street is something you'd never expect. There's a huge wooden billboard announcing, "The Carrying of Firearms Strictly Prohibited."

While people were allowed to have guns at home for self-protection, frontier towns usually barred anyone but law enforcement from carrying guns in public.

When Dodge City residents organized their municipal government, do you know what the very first law they passed was? A gun control law. They declared that "any person or persons found carrying concealed weapons in the city of Dodge or violating the laws of the State shall be dealt with according to law." Many frontier towns, including Tombstone, Arizona--the site of the infamous "Shootout at the OK Corral"--also barred the carrying of guns openly.

Today in Tombstone, you don't even need a permit to carry around a firearm. Gun rights advocates are pushing lawmakers in state after state to do away with nearly all limits on the ability of people to have guns in public.

Like any law regulating things that are small and easy to conceal, the gun control of the Wild West wasn't always perfectly enforced. But statistics show that, next to drunk and disorderly conduct, the most common cause of arrest was illegally carrying a firearm. Sheriffs and marshals took gun control seriously.
These facts aren't contested. They aren't obscure. You can even find them in classic Westerns like Winchester '73.

In 1876, Lin McAdam (James Stewart) and friend 'High-Spade' Frankie Wilson (Millard Mitchell) pursue outlaw 'Dutch Henry' Brown (Stephen McNally) into Dodge City, Kansas. They arrive just in time to see a man forcing a saloon-hall girl named Lola (Shelley Winters) onto the stage leaving town. Once the man reveals himself to be Sheriff Wyatt Earp (Will Geer) Lin backs down. Earp informs the two men that firearms are not allowed in town and they must check them in with Earp's brother Virgil.
In other words, even people who learned their history from old movies should know there's something extreme going on.

Traditional vs. current gun culture

Both Joseph and I come from parts of the world (Northern Ontario and the lower Ozarks, respectively) where guns played a large part in the culture. Hunting and fishing was big. This was mainly for sport, though there were families that significantly supplemented their diet with game and most of the rest of us had family members who remembered living off the land.

We also have a different take on guns for defense. When a call to the police won't bring help within forty-five minutes (often more than that where Joseph grew up), a shotgun under the bed starts sounding much more sensible.

Guns have never been a big part of my life, but I'm comfortable with them. I know what it's like to use a rifle, a shotgun and a revolver. I don't get any special emotional thrill from firing a gun but I do appreciate the satisfaction of knocking a can off of a post.

I think this perspective is important in the debate for a couple of reasons: first, because many discussions on the left often get conflated with impressions and prejudices about rural America and the South and second, (and I think this is the bigger issue) because these traditional ideas are becoming increasingly marginalized in the gun rights movement.

Gun culture has changed radically since the Eighties, as this TPM reader explains:
Most of the men and children (of both sexes) I met were interested in hunting, too. Almost exclusively, they used traditional hunting rifles: bolt-actions, mostly, but also a smattering of pump-action, lever-action, and (thanks primarily to Browning) semi-automatic hunting rifles. They talked about gun ownership primarily as a function of hunting; the idea of “self-defense,” while always an operative concern, never seemed to be of paramount importance. It was a factor in gun ownership - and for some sizeable minority of gun owners, it was of outsized (or of decisive) importance - but it wasn’t the factor. The folks I interacted with as a pre-adolescent and - less so - as a teen owned guns because their fathers had owned guns before them; because they’d grown up hunting and shooting; and because - for most of them - it was an experience (and a connection) that they wanted to pass on to their sons and daughters.

And that’s my point: I can’t remember seeing a semi-automatic weapon of any kind at a shooting range until the mid-1980’s. Even through the early-1990’s, I don’t remember the idea of “personal defense” being a decisive factor in gun ownership. The reverse is true today: I have college-educated friends - all of whom, interestingly, came to guns in their adult lives - for whom gun ownership is unquestionably (and irreducibly) an issue of personal defense. For whom the semi-automatic rifle or pistol - with its matte-black finish, laser site, flashlight mount, and other “tactical” accoutrements - effectively circumscribe what’s meant by the word “gun.” At least one of these friends has what some folks - e.g., my fiancee, along with most of my non-gun-owning friends - might regard as an obsessive fixation on guns; a kind of paraphilia that (in its appetite for all things tactical) seems not a little bit creepy. Not “creepy” in the sense that he’s a ticking time bomb; “creepy” in the sense of…alternate reality. Let’s call it “tactical reality.”

The “tactical” turn is what I want to flag here. It has what I take to be a very specific use-case, but it’s used - liberally - by gun owners outside of the military, outside of law enforcement, outside (if you’ll indulge me) of any conceivable reality-based community: these folks talk in terms of “tactical” weapons, “tactical” scenarios, “tactical applications,” and so on. It’s the lingua franca of gun shops, gun ranges, gun forums, and gun-oriented Youtube videos. (My god, you should see what’s out there on You Tube!) Which begs my question: in precisely which “tactical” scenarios do all of these lunatics imagine that they’re going to use their matte-black, suppressor-fitted, flashlight-ready tactical weapons? They tend to speak of the “tactical” as if it were a fait accompli; as a kind of apodeictic fact: as something that everyone - their customers, interlocutors, fellow forum members, or YouTube viewers - experiences on a regular basis, in everyday life. They tend to speak of the tactical as reality.
There's one distinction I want to add here. There are reasonable scenarios where a pump action shotgun or a reliable revolver might get a rural homeowner, a clerk at a convenience store or a business traveler who can't always avoid risky itineraries out of trouble.

But when we talk about "tactical" weapons, we're no longer talking reasonable scenarios for civilians. Regular people don't need thirty round magazines and laser sights to defend themselves. They need these things to live out fantasies, scenes they saw in movies. We're talking about different guns with an entirely different culture.

Wednesday, December 19, 2012

The end was near a long time ago

While working on an upcoming post, I came across this quote from a 1989 NYT profile of Fred Silverman:
He said going back to a network does not interest him. He added that he would tell any executive who took a network job to ''take a lot of chances and really go for it.''

''This is not a point in time to be conservative,'' he said. ''The only way to stop the erosion in network television is to come up with shows that are very popular.''
Given all we hear about how fast things are changing and how those who don't embrace the future are doomed, it's healthy to step back and remind ourselves that, while there is certainly some truth to these claims, change often takes longer than people expect.

Experts started predicting the death of the big three networks about forty years ago, when VCRs and satellites started changing the landscape. That means people have been predicting the imminent demise of the networks for more than half the time TV networks have been around.

At some point, technology will kill off ABC, CBS or NBC, but they've already outlasted many predictions and a lot of investors who lost truckloads of money over the past forty years chasing the next big thing would have been better off sticking with a dying technology.

Ddulites make lousy investors.

Tuesday, December 18, 2012

Medical Education

This letter by Adam Smith is very interesting.  The key excerpt:

What the physicians of Edinburgh at present feel as a hardship is, perhaps, the real cause of their acknowledged superiority over the greater part of other physicians. The Royal College of Physicians there, you say, are obliged by their charter to grant a licence, without examination, to all the graduates of Scotch universities. You are all obliged, I suppose, in consequence of this, to consult sometimes with very unworthy brethren. You are all made to feel that you must rest no part of your dignity upon your degree, distinction which you share with the men in the world, perhaps, whom you despise the most, but that you must found the whole of it upon your merit. Not being able to derive much consequence from the character of Doctor, you are obliged, perhaps, to attend more to your characters as men, as gentlemen, and as men of letters. The unworthiness of some of your brethren, may, perhaps, in this manner be in part the cause of the very eminent and superior worth of many of the rest. The very abuse which you complain of may in this manner, perhaps, be the real source of your present excellence. You are at present well, wonderfully well, and when you are so, be assured there is always some danger in attempting to be better.
It is a rather strong attack on medical licensing and the monopoly privileges it creates, both for the schools that confer degrees and for the people who are licensed.  The idea of a parallel apprenticeship system is intriguing, if difficult to make work in practice.  But the argument about the perverse incentives this system creates is of interest even today. 

Monday, December 17, 2012

A difference in kind

Megan McArdle has a story about pensions today that is too clever by half.  The only problem is that the analogy between a firm and a government breaks down completely:

Such a pension fund would, of course, be illegal. And for good reason: we recognize that it is not, in fact, a pension fund. It’s a promise by the corporation to pay its workers at some later date, not a funded pension plan. The company can call this anything they want—trust fund, pension plan, Ponzi scheme—but whatever we call it, we’d recognize it for what it is: a meaningless accounting fiction that does not in anyway enhance the security of worker retirements. And if, say, Verizon tried to fund its pension plan this way, liberals would hit the roof. Because we recognize that a pension fund full of third-party securities is not, in fact, very much like a pension fund full of securities issued by the same entity—corporate or government—that owes you the pension. 
It is true that the Federal government can choose to stop paying Social Security at any time.  And it is also true that money is fungible.  But the main point of difference is that the Federal government has the right to tax the citizens of the United States (and armed forces to back this right up).  No company has access to such a long-term asset. 

This has not stopped governments from defaulting in the past, and it surely won't stop them in the future.  But this is the same entity that regulates contracts between you and your self-funded pension fund.  Why doesn't a financial asset manager decide to cash in all of their clients' assets and head off into the sunset?  Well, because it is illegal.  But if there is no government, then there is no reason to give back these huge pools of cash.

So, yes, it is possible that the government will turn on you, but consider that a) it holds a massive, long-term asset and b) not much else in the financial markets is likely to survive if it ceases to function. 

Sunday, December 16, 2012

Weekend puzzle -- Count the Squares

I really like this one. The solution is easy and requires only basic math and good pattern recognition. If you don't spot the pattern, however, you've got a slog ahead of you.

There are hundreds of squares in this picture with dimensions ranging from 1x1 to 12x12. Exactly how many squares are there?

I've got a more detailed pedagogical discussion of the problem at You Do the Math. I'll post the answer in the comments section of this post later in the week.
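For readers who want to check their pattern recognition against brute arithmetic, here's a sketch. It assumes the picture is an ordinary 12x12 grid of unit cells, counting only axis-aligned squares; the actual puzzle picture may differ, so treat this as a baseline rather than the answer:

```python
# Count axis-aligned squares in a plain n x n grid.
# NOTE: assumes the puzzle's picture is an ordinary 12x12 grid;
# the real picture may add or remove squares.
def count_squares(n):
    # A k x k square can be placed at (n - k + 1)^2 positions,
    # so the total is the sum of squares 1^2 + 2^2 + ... + n^2.
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

print(count_squares(12))  # 650 for a plain 12x12 grid
```

That total is indeed "hundreds of squares," and the pattern the sketch exploits (each size contributes a perfect square of placements) is the one that turns the slog into a one-line sum.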

Why are there no "Right to Ranch" states?

Here's another installment in the right-to-free-ride discussion. Sometimes it's useful to consider hypothetical extreme cases when discussing a proposed law. Imagine that we took non-right-to-free-ride states and added the following conditions:

If you refuse to make payments to the union representing you, you will NOT be allowed to:

1. go and work for a non-union shop;

2. move to a right-to-free-ride state;

3. go into business for yourself.

You will, in other words, not be allowed to ply your trade under any circumstances unless you pay these fees. What would the Republican position be on these policies? How might conservative justices like Scalia and Thomas rule on the laws?

It turns out that you don't need to speculate, because this isn't a hypothetical. If you're an American beef producer, you pay a fee to the National Cattlemen's Beef Association for every head of cattle you sell. You are required to fund lobbying and advertising campaigns ("Beef, it's what's for dinner"). What's more, the Supreme Court upheld the fee in an opinion authored by Justice Scalia.

At the risk of belaboring the obvious, this was never about workers' rights. It was and is about one party using legislation to remove funding from the other party. Talking about it in any other terms is an insult to our intelligence.

And frankly, my intelligence is getting damned pissed off.

Right-to-free-ride state

Since we seem to be in the middle of a right-to-work thread, we should really take a moment to note an important point by Kahlenberg and Marvit in the New Republic:
"Nevertheless, there is an important lesson for liberals and labor in the Michigan story about the power of rhetoric. "Right to work" is a mendacious slogan but a politically resonant one. It's mendacious because everyone in every state has the right to work; the legislation simply gives employees the right to be free riders--to benefit from collective bargaining without paying for it. Yet members of the media mostly employ the phrase without qualification. (Even those that say "so-called" right to work repeat the phrase over and over again.) This past Saturday, the Washington Post's front page featured stories on gay marriage going before the U.S. Supreme Court and the right to work debate in Michigan--and a casual reader could assume that both stories were about "rights" ascendant."

Saturday, December 15, 2012

What's wrong with science fiction authors?

They seem to have strong opinions about people asking for free samples!

In truth, though, carefully edited text is a lot nicer to read and developing text to this level isn't free.

Right to work

I have to admit that I am with Jon Chait here:

Why is it fair to make workers in union workplaces pay an agency fee even if they don’t want to join the union? If you step back and think about it, the focus on this as a matter of personal liberty is kind of silly. On almost every single point of possible discontent you may have with a job — you don’t like the pay, you don’t like the hours, you don’t like your boss, you don’t like wearing a hairnet, you don’t like having ESPN blocked on your work computer even when there’s no work for you to do — the recourse is go work somewhere else. That recourse is also available for people who don’t want to pay an agency fee.
On top of that, there’s an additional recourse available for union-hating workers that isn’t available for most things. If a majority of workers don’t want to be represented by a union, they can vote to decertify the union.
Notice that we have almost no interest in attacking other aspects of employment that reduce freedom for the workers involved.  If unions made life worse for workers they would, on average, wither away if there were enough employment alternatives.  So if you think unions are uniformly bad, then maybe fighting for full employment would be a better choice?  Then everyone could vote (with their feet) for the jobs they found most satisfying.  

But curiously, the right-to-work folks don't seem to be agitating for increased government spending to make full employment a reality.  Nor do they seem to be fighting, on principle, for more workplace regulation in other arenas.  It's all rather confusing.

Friday, December 14, 2012

New at You Do the Math -- The Eugen Weber Paradox

This is not Harvey Korman...

...which is at least tangentially related to this post about Udacity, educational media, ddulites and my skepticism about the coming revolution.

Thursday, December 13, 2012

"Rove's Dilemma" Graph Game

I suspect that Karl Rove's strategy in 2000 was to use the support of evangelicals and nativists to entrench Republican power, then abandon them and transition to other groups, particularly Hispanics. Rove, an agnostic who was close to his gay adoptive father, appears to have had no personal investment in the social conservative wing of the party, and though overrated as a strategist, he was certainly capable of following demographic trends.

Twelve years later, that strategy is looking less than doable. The nativists and social conservatives appear to be a shrinking demographic, and the idea of winning over Hispanic voters seems increasingly unlikely. Some have made the case that the GOP's best strategy at this point is to abandon its shrinking base and make a big play for the next demographic wave. The trouble with that strategy (from a strictly strategic viewpoint) is that a party has to maintain a certain critical mass to remain viable.

All this got me thinking about the best way to describe this. Here's what I've come up with. It's not there yet but I think it's on the right track.

1. We have a graph where each node is associated with a shifting size metric.

2. These nodes represent populations. You win support from these populations with messages.

3. If you target two nodes with the same message, those nodes are associated.

4. If a message targeted at one node raises support in that node and lowers it in another, those two nodes are disassociated.

5. You cannot target disassociated nodes with the same message.

6. Messages have a half-life. If you wish to drop a node so you can start targeting one disassociated with it, you will have to wait a few cycles.

7. The objective is to get the most support possible over the run of the game while never letting that support fall below a critical level.

Other than a little work with Bayesian networks (which don't quite seem to cover this), I've never done a graph-based simulation, so the odds are good that I'm stating or missing the obvious here. Still, the idea of a graph as a shifting fitness landscape might be interesting, and it does explain how GOP strategists could have seen the coming demographic tides and still found themselves trapped by the rising water.
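The seven rules above can be sketched as a toy simulation. To be clear, everything in this snippet (class name, bloc names, the specific gain and decay parameters) is a hypothetical illustration of the rules, not an established formalism:

```python
from itertools import combinations

class CoalitionGame:
    """Toy model of the graph game: nodes are voter blocs with size
    weights, messages raise support in the blocs they target,
    disassociated blocs can't share a message, and support decays by
    half each cycle a bloc goes untargeted (rule 6's half-life)."""

    def __init__(self, sizes, critical=0.2):
        self.size = dict(sizes)                      # bloc -> population weight (rule 1)
        self.support = {b: 0.0 for b in self.size}   # bloc -> share of bloc won (rule 2)
        self.disassociated = set()                   # frozenset pairs that can't be co-targeted
        self.critical = critical                     # floor from rule 7

    def disassociate(self, a, b):
        # Rule 4: a message that helps one bloc while hurting another
        # splits the pair (permanently, in this toy version).
        self.disassociated.add(frozenset((a, b)))

    def cycle(self, targeted, gain=0.3):
        # Rule 5: all blocs sharing this cycle's message must be compatible.
        for a, b in combinations(targeted, 2):
            if frozenset((a, b)) in self.disassociated:
                raise ValueError(f"can't co-target {a} and {b}")
        for bloc in self.support:
            if bloc in targeted:
                self.support[bloc] = min(1.0, self.support[bloc] + gain)
            else:
                self.support[bloc] *= 0.5            # rule 6: one-cycle half-life
        return self.total()

    def total(self):
        # Rule 7: size-weighted support; the player loses if this
        # ever dips below the critical level.
        return sum(self.size[b] * self.support[b] for b in self.size)

# Hypothetical blocs, loosely matching the post's framing.
game = CoalitionGame({"evangelicals": 0.30, "hispanics": 0.25, "suburban": 0.45})
game.disassociate("evangelicals", "hispanics")       # rule 4 has fired
print(game.cycle({"evangelicals", "suburban"}))      # ≈ 0.225
print(game.cycle({"suburban"}))                      # evangelical support decays
```

Even this crude version shows the trap: once the disassociation edge exists, you cannot court the new bloc without letting support in the old one decay, and the decay may drop you below critical mass before the new support arrives.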

...worth a thousand words

A graph that really makes its point.

From James Lawrence Powell via Tim Haab by way of Thoma

Wednesday, December 12, 2012

This is simply remarkable

Aaron Carroll:
Lots of those 65 and 66-year-olds will need Medicaid. That will cost the federal government about $8.9 billion. Lots of those seniors will go to the exchanges for insurance. That will cost the federal government about $9.4 billion in subsidies. Oh, that Medicaid will cost states too, about $700 million. The 65 and 66 year olds getting  insurance from their employers will cost them about $4.5 billion (they’re expensive). As I’ve reported before, Medicare premiums will go up ($1.8 billion), and exchange premiums will go up ($700 million). And, there will be increased out-of-pocket spending by the 65 and 66-year-olds themselves for premiums, deductibles, co-pays, etc. Add it all up. To save the federal government $24.1 billion, we need to spend $29.8 billion.
Even if some of these assumptions are naive, it doesn't look like this change is about saving money at all.  Instead, I am beginning to accept Peter Suderman's claim that it is all about the long game of shrinking the program's constituency.  But as a practical matter, this seems like a fairly stiff price to pay for a symbolic gesture that will make it more difficult for people to get medical care.  I am not even convinced that it does much to increase the work incentive -- moving Social Security's retirement age to 67 likely did most of the heavy lifting on this front already. 
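The arithmetic in Carroll's excerpt is worth tallying explicitly. The itemized figures below are from the quote (in billions of dollars); the residual I compute to reach his $29.8 billion total is my own inference that it represents the out-of-pocket spending he mentions but doesn't itemize:

```python
# Cost shifts from raising the Medicare age to 67, per the quoted
# figures (in $ billions). The dict keys are my own shorthand labels.
costs = {
    "federal Medicaid": 8.9,
    "exchange subsidies": 9.4,
    "state Medicaid": 0.7,
    "employer coverage": 4.5,
    "Medicare premiums": 1.8,
    "exchange premiums": 0.7,
}
itemized = sum(costs.values())      # ≈ 26.0
savings = 24.1                      # federal savings from the change
total_cost = 29.8                   # Carroll's all-in figure

print(itemized)                     # itemized costs
print(total_cost - itemized)        # ≈ 3.8: implied out-of-pocket residual
print(total_cost - savings)         # ≈ 5.7: net loss, system-wide
```

So on these numbers the policy spends roughly $5.7 billion system-wide for every dollar-equivalent of federal savings it claims, which is the point of the post.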

Peter Principle or Dilbert Principle*

In case you haven't heard, Jeff Zucker has just been named president of CNN. Since we've been discussing incompetent executives lately, this seems like a good time to ask how, despite huge stakes, fierce competition and multiple layers of screening, incompetents still sometimes manage to make it to the top of large corporations.

At first glance, Zucker would appear to be a perfect example of the Peter Principle, an effective producer promoted past his talents, but when you look closer at Zucker's one big accomplishment, the resurgence of the Today Show, you see less proof of competence and more evidence that corporate reputations are often built on unrepresentative baselines, delayed effects, external factors, and the tendency to embrace appealing and established narratives.

First some background via Wikipedia (as are all block quotes unless otherwise noted).
In 1989, [Zucker] was a field producer for Today, and at 26 he became its executive producer in 1992. He introduced the program's trademark outdoor rock concert series and was in charge as Today moved to the "window on the world" Studio 1A in Rockefeller Plaza in 1994. Under his leadership, Today was the nation’s most-watched morning news program, with viewership during the 2000-01 season reaching the highest point in the show’s history. ... In 2000, he was named NBC Entertainment's president.
Sounds pretty good, but remember two things that happened at the Today Show in 1990 and 1991. The first was a disastrous transition from Jane Pauley to Deborah Norville. You could make the case that Norville was actually better qualified for the job, but that did nothing to soften the viewer reaction. The younger Norville was seen as taking advantage of looks and youth to steal Pauley's position. Saturday Night Live even did a sketch entitled "All About Deborah Norville."

The ratings took a hit from the debacle, but Norville was soon gone, setting the stage for an upturn. That recovery was all but guaranteed by the hiring in 1991 of Katie Couric, a journalist who could have been genetically engineered to host a morning show.

Whoever got the producer's gig in 1992 was almost certain to oversee a substantial rise in ratings as the memory of the debacle faded and Couric started bringing in viewers. Now add in what was going on at Today's chief competitor.
Good Morning America entered the 1990s with its overwhelming ratings success. Gibson and Lunden were a hard team to beat. But Good Morning America stumbled from its top spot in late 1995. Lunden began to discuss working less, and mentioned to network executives that the morning schedule is the hardest in the business. ABC executives promised Lunden a prime time program; Behind Closed Doors would be on the network schedule. On September 5, 1997, Lunden decided to step down after seventeen years on Good Morning America and was replaced by Lisa McRee. Gibson and McRee did well in the ratings. However, ratings sharply declined when Gibson also left the show to make way for Kevin Newman in 1998. With McRee and Newman as anchors of Good Morning America, long-time viewers switched to Today, whose ratings skyrocketed and have remained at the top spot since the week of December 11, 1995.
In other words, Zucker started with an artificially low baseline, was handed a major TV personality on the verge, and saw his competition fall apart at exactly the right time. All of the important drivers of the show's success were things he had nothing to do with.

Just to be clear, many, probably most, CEOs get their jobs because they are smart and capable and add value to the company, but there are other ways to succeed in business. You can:

Fit in with the culture;

Make the right friends;

Couple your career with rising leaders and initiatives;

Fashion a persona that complements the favored narratives.

As for that last one, the legend of the studio boy wonder runs deep in the entertainment industry, from Thalberg to Silverman. When Zucker was put in charge of Today in his twenties and NBC in his thirties, he tapped into something both familiar and resonant.

But Thalberg and Silverman really were boy wonders who had laid down impressive resumes before they were put in charge. Zucker only had the perception of success. Sometimes, though, that's enough.

* Technically not the Dilbert Principle, but close.

Tuesday, December 11, 2012

Right to work is anything but!

This is right on:
Now naturally an employer's not going to want to agree to that. But he's not going to want to agree to higher pay or more vacation days either. That's why it's a negotiation. A right-to-work law is a law banning employers from making that concession.
The impact, obviously, is to make it hard to form strong unions in a given jurisdiction and thus make it a more business-friendly jurisdiction. But note that this same trick works across the board. You could just ban pay raises in general. Any one firm, after all, faces a dilemma. On the one hand it would be more profitable to pay people less. On the other hand, it's also unprofitable to have everyone quit to go work for some other higher-paying company. So a law against pay raises would make everyone more profitable, spurring crazy business investment and job creation. Except nobody does that because it would be (a) insane and (b) obviously unfair. And yet the proponents of right-to-work laws are generally exactly the people most inclined to stand up for freedom of contract under other circumstances.
It is very odd that right to work has become a libertarian position.  Employment contracts require both sides to be able to bargain freely.  In an "Adam Smith" nation of shopkeepers, where an employee bargains with a single business owner, there is some sense to this arrangement.  But business has a lot of money and power relative to workers.  Why should it be illegal for workers to pool resources on their side when it is fine for owners to pool resources by forming a corporation?  Do we honestly think that corporations don't also try to influence the political process via lobbying and donations, just as unions do? 

Wage controls are also not without precedent, but at least there the contradiction is more immediate and obvious.  Now, I am not a libertarian, but I would really like to know how this sort of ban on organizing groups is not a loss of freedom.  Sure, you could end up belonging to a group you did not intend to join.  But we don't ban corporate takeovers because the workers never consented to being part of the acquiring company, do we? 

I'm a Mac... and I'm about to be eaten by feral dingoes

Yet another reason to stick with the Android.

Monday, December 10, 2012

Hostess Pensions

Some choice quotes from a Yahoo! News article on the Hostess pension plan hijinks (called by one person in the article a "betrayal without remedy"):

Mr. Rayburn became chief executive in March and learned about the issue shortly before the company shut down, he said. "Whatever the circumstances were, whatever those decisions were, I wasn't there," he said.


It might have been "impossible" to undo the agreements that called for Hostess to make pension contributions using employee money, Mr. Rayburn added. One reason: Hostess could have been too short of cash to make up the difference, though he said he isn't sure.
Now, it is true that Mr. Rayburn was a last-minute CEO and may well have ended up leading a company in worse shape than expected.  But the fact that he could not answer basic questions, like whether the company had enough cash to meet its obligations, suggests a lot of disarray.  I understand that precise numbers in a large business are elusive, but he should have had a decent idea, given that this was a major sticking point in the union negotiations.

Similarly, claiming that he can't take responsibility for previous bad decisions makes sense at some level.  But at another level it makes me wonder how anybody can negotiate with an entity that can change management and then request a clean slate on past misdeeds because the current team was not there.

Krugman versus Yglesias

Paul Krugman has two explanations for why profits are rising at the expense of workers:

As best as I can tell, there are two plausible explanations, both of which could be true to some extent. One is that technology has taken a turn that places labor at a disadvantage; the other is that we’re looking at the effects of a sharp increase in monopoly power. Think of these two stories as emphasizing robots on one side, robber barons on the other.
Matt Yglesias advances what I think is a better explanation:

 To put it nonpolemically, you can see in the chart that not only is there a structural trend in the labor share of output, there's also a strong cyclical trend. The labor share declines during recessions and rises during booms. And the problem of the Federal Reserve is that over the past 30 years, it has a perfect track record of never allowing inflation (which is to say a sustained period in which wages rise faster than productivity), but it doesn't have a perfect track record of never allowing recessions. The inevitable consequence of this asymmetrical success is for the labor share to steadily decline.
I like this argument for its simplicity: it is based on a clear policy choice that was made and that continues to this day.  There is no need to appeal to "the world has changed" over and above the decision to hold price stability in place at the cost of employment.  This is directly relevant today, as the Federal Reserve has a dual mandate.  On one side, it is required to ensure price stability.  On the other, it must encourage full employment.  It's pretty clear that it is doing much better with one piece of this mandate than the other. 

Sunday, December 9, 2012

To spell out or not to spell out

I just posted some thoughts on the pros and cons of teaching algorithms for solving math problems. It's fairly long, but here's a quick look:

It takes a great deal of thought to come up with an algorithm and to understand why it works, but actually performing one should be an almost entirely mechanical process. The whole point is to get the answer reliably and quickly with an absolute minimum of thinking. 
This isn't a bug; it's a feature. There are situations where you want people operating on autopilot. Thought is slow, unpredictable and distracting. You probably don't want your tax preparer stopping to reflect on the subtleties of economic distortions while filling out your 1040 and if you're an administrator, you certainly don't want students thinking about the nature of numbers while doing long division on a standardized test that determines your next bonus. You could even argue that most of the progress of mathematics over the past three centuries is due to notation that makes much of the work thought-free thus allowing mathematicians and scientists to focus on more important matters.

If you're a teacher or just someone with an interest in math education, check it out and let me know what you think.

Saturday, December 8, 2012

First they come for the pigeons...

Well this is ominous:
The study, conducted largely by scientists from the University of Toulouse, is titled "Freshwater Killer Whales" and describes the beaching behavior of European catfish on the River Tarn in France. The study's abstract section states: "Among a total of 45 beaching behaviors observed and filmed, 28 percent were successful in bird capture."

Friday, December 7, 2012


I was struck by this comment in Dean Dad's column:
The Times article quotes someone saying he knows people with six figure incomes from dog-walking businesses.  I don’t.  And I bet you don’t, either.
What is interesting about this quote is that I see a six-figure income as a dog walker as plausible if you are talking revenues.  Dog walking is actually a tough gig if you are going to make it a profession and deliver a high-end experience.  You need a vehicle to transport the animals to a dog park.  You need insurance (liability and personal health).  You need to be bonded, since you likely have access to people's houses while they are at work.  And you are dealing with beloved animals who may well decide not to listen at a key moment. 

So it takes trust, and there is a lot of effort involved in building that reputation.  The effect of reputation (do you trust the established professional or the newcomer?) is an effective barrier to entry for new dog walkers, who can't simply compete on price.  This creates a space for some dog walkers to generate a lot of revenue. 

But this sort of heavily networked profession is about more than picking up dog poop.  It's also about controlling lots of animals, knowing how to run a high-liability business, and developing an impeccable reputation -- being able to sell yourself as trustworthy.  Ending up in that sort of profession, where you are the real product being sold (musicians, car salesmen, models), has always been an alternate path to a decent income.  But there is a huge element of risk in all of these approaches, and not everyone can end up with a highly successful network. 

Value Investing

I often cringe at Megan McArdle's social commentary, but she seems to be pretty sharp at the level of the small, individual investor.  I especially liked her comments on value investing:

Graham's big investment coups came in the early 1930s, when the market was so depressed it was literally possible to buy some stocks for less than you'd get if you just shut the place down and sold off the assets.  Buffett similarly made a lot of money in the prolonged bear market of the 1970s.  And except for a brief period in late 2008 and early 2009, the market has simply never dipped low enough for investors to make those kinds of profits.  To be sure, 2009 was a great year for value investors, but you cannot build an entire financial career off of a single nine-month period.
I think that this is precisely correct.  To gain huge returns (on the order of Buffett's) one really needs a stock market so depressed that extraordinary value is just sitting there.  I remain skeptical that such opportunities exist (in general) where everyone is looking.  In modern, carefully analyzed, and highly liquid markets, this seems to be a hard thing to pull off. 

It's also the case that it can't be done consistently with single picks.  A single mispriced stock could end up never appreciating in value or, even worse, could suffer an adverse change (say, of CEO and thus strategy).  You need to be able to play the numbers (trying for many opportunities and relying on the law of large numbers), and that probably requires systematic underpricing across the market. 


There has been a recent discussion of Ross Douthat's column on women in the West not having enough children and how this is a sign of decadence.  I want to outsource one major objection to this column to James Joyner, who is far from a liberal.  Consider:
Or, as some of our fellow conservatives call it, “taking responsibility for their lives and not having more children than they can afford.” Indeed, Douthat seems to acknowledge that on the part of the individual while lamenting the collective outcome.
But this is an argument that we conservatives apply nowhere else that I can think of. Indeed, most American conservatives, myself included, rail against collectivism in much less significant arenas. Let government try to force us to change to a more energy efficient lightbulb or regulate the water capacity of our toilets and the calls for revolution ring out across the land. Encourage us to buy more energy efficient automobiles through tax incentives and corporate subsidies and you’re a tyrant. Suggest that we turn off electronic devices that aren’t in use and you’re at very least a dirty hippy and probably an out-and-out commie. But suggest that women give up the advances they’ve made over the last half century because somebody has to have more kids, why, what could be more reasonable?
I think that this point is completely correct.  Even worse, there are a lot of policy choices we could implement if we decided (as a society) that we wanted more children.  Just consider programs like Aid to Families with Dependent Children, which could be reinvented.  Or we could subsidize childcare.  Or add mandatory long maternity leaves.  Why not use incentives to try to improve collective outcomes?

Sunday, December 2, 2012

Golden Sponge Cake with Creamy Filling and Moral Hazard

One argument for high executive compensation is that there is a high degree of risk that comes with the job. In theory, great success is met with great reward, mediocre results produce much lower compensation and anything short of that will cost you your job. This is supposed to align the interests of executives with those of shareholders -- my portfolio loses value and you lose your job -- but where we have risk, it probably makes sense to talk about moral hazard.

Think about the familiar argument against a one-time government program to bail out underwater homeowners. Even if we included a provision saying that the homeowners sign away the right to take advantage of similar programs in the future, you could still argue that the program presented a moral hazard because it set a precedent. To put it in plainer terms, it's a bad example. Other prospective homeowners will look at this and be more likely to buy more house than they can afford or, worse yet, pick up a copy of Rich Dad and start flipping houses.

We can argue about the validity of the argument in this particular case (I have problems with it myself), but we should all be able to agree on at least two points:

1. there are potential problems to shielding people from negative consequences;

2. these problems extend not just to those shielded but also to those who think they may be shielded in the future.

All of this suggests that the numerous and highly publicized examples we've seen of executives getting better compensation than their performance merits might be doing damage even in companies that really are aligning pay and performance.

And it makes stories like this truly scary:
The latest news from the bankruptcy front at iconic Twinkies maker Hostess Brands: AP is reporting that the Irving, Texas company is planning to ask a bankruptcy judge to grant approval of bonuses totaling up to $1.8 million for its executives. Hostess says the incentive pay is necessary to assure that the 19 managers in charge of the liquidation process remain on board until the wind-down is complete. Hostess wants to make two of those executives eligible for additional financial rewards, depending on how efficiently they carry out the liquidation.
What's so disturbing here is that the horrible condition of Hostess is actually an argument for increasing the potential compensation of the executives who drove it into the ground. They managed the company so badly that it supposedly can't survive a change in management.

That's what I call a bad example.