Friday, July 26, 2013

Chait versus DeLong

Brad DeLong has a very nice reply to Jon Chait's latest column.  I think that any reasonable analysis of tuition versus professorial salary in real terms is going to make it clear that the link between salaries and tuition is . . . weak.  After all, we live in the age of the adjunct professor/lecturer -- if just slashing salaries were the answer then adjuncts would have solved the problem.  Just ask anyone who has taught at those salaries. 

It is also the case that private schools (e.g., Harvard) give tenure and pay high salaries.  It cannot just be that the private sector has no ability to assess value.  Nor have I missed the fact that summer camp seems to be increasing in price, making it also less accessible.  Nor is anybody suggesting that replacing summer camp with a television set is the way to improve educational outcomes. 

Yet we have had VHS tapes since the 1980s.  If taping and replaying lectures were so superior, why did it take the internet to make this a viable approach? 

The one place where I think Jon Chait is correct is when he draws the analogy with health care.  Some parts of health care have been real improvements and were worth the cost.  Conspiracy theories aside, comparing cardiovascular disease treatment between 1960 and today is sobering; expensive technologies and drugs really have helped.  What is challenging is drawing the fine line between what is a real improvement in quality and what is an unnecessary expense.  Some degree of pressure on higher education may elucidate some of the fault lines, and that could be quite useful to all of us. 

Thursday, July 25, 2013

The Ryan Reynolds Effect

Sometimes the potential of a product, strategy or market will become an idée fixe in a company or even an entire industry. In the case of movies and television, this often shows up as the conviction that a certain actor is going to be big, despite an apparent lack of interest from the audience. This is related to but not quite the same as the previously mentioned Paula Marshall effect where producers continue to cast actors who perfectly match a type even though casting against type may offer greater returns (that falls closer to the safe failures Pauline Kael described here).

This idée fixe is more of a collective belief that's resistant to evidence, specifically the belief that a given performer can convince people to watch or listen to something which they would otherwise ignore, a quality known as being bankable.

Though bankability can be difficult to measure, the belief in it can be evidence-based. I've heard that it was Sweet Home Alabama that made Hollywood take Reese Witherspoon seriously as a star. She had already been critically acclaimed for Election and had starred in the hit Legally Blonde, but the general feeling was that the success of Sweet Home ($180 million off of a $38 million budget) was almost entirely due to Witherspoon, that she "carried" the movie. With all due caveats about counterfactuals, that was, by Hollywood standards, a remarkably reasonable conclusion.

You can make similar arguments for the bankability of any number of actors from Chaplin and Pickford to John Wayne to Adam Sandler. This isn't a comment on quality -- the only Sandler scene I really enjoyed was the one where Bob Barker beats him senseless -- but I think there's very little question that he does bring people in to see a certain kind of movie. As Burt Reynolds once said, it takes a particular kind of talent to make chicken salad out of chicken shit.

It's one thing to say that certain stars are bankable;  it's quite another to say that this quality can be spotted in advance. Nonetheless, Hollywood often manages to convince itself that it has spotted the next big thing.

This phenomenon is easy to spot. It invariably involves a young actor, usually blandly attractive, sometimes at the center of some kind of buzz (buzz-generated next-big-things are often attractive, but non-attractive next-big-things are almost always buzz-generated -- see Jack Black), getting an inexplicable string of choice roles.  The 90's actor Farrah Forke is an excellent example. For a few years Forke was all over network television. She was a regular on Wings. She had a recurring role on Lois and Clark, and she starred in two short-lived sitcoms in rapid succession, all despite no apparent interest from the viewers. After this flurry of activity, her career dropped off sharply and she left acting altogether.

That's a fairly standard career path for a next big thing. There are exceptions. I seem to recall reading that the executives at NBC insisted on Jane Leeves being cast in Frasier because they had high expectations for her, so you might argue that she was a next big thing who panned out, but, other than a very obscure syndicated sitcom, Frasier was Jane Leeves' first real exposure, and she was a hit from the very start.

You can also find cases of stars who required many roles before they found one that clicked. George Clooney comes to mind, but Clooney was less next big thing and more lost cause, toiling away in obscure roles until finally getting a hit with ER (believe it or not, the second show by that name he had appeared in).

The classic next big thing career arc, where the actor is given better and better roles despite a lack of audience response, almost never ends well, which brings us to Ryan Reynolds.
The early returns are in, and it looks like a weekend to forget for Ryan Reynolds.
The actor’s two new films -- the animated “Turbo” and the sci-fi comedy “R.I.P.D.” -- are off to dreadful starts at the box office, according to rough estimates based on Friday matinee figures.
The $130-million “R.I.P.D.” may struggle just to gross $10 million in its first three days of release, the estimates show. And the even more expensive "Turbo" could gross barely double that.
...
“Red 2” should also surpass the $135-million “Turbo,” from DreamWorks Animation and distributor 20th Century Fox, which looks to gross $20 million or less in its first three days in theaters (the film opened Wednesday). “Turbo’s” desultory launch will probably be below the third weekend's returns for “Despicable Me 2.”
(And in case you're wondering, $135 million is on the high end for a non-Pixar computer animated film. Despicable Me 2 had a budget of $76 million.)

We probably shouldn't put too much blame on Ryan Reynolds' shoulders for the failure of Turbo. One of the lessons of Pixar is that animated films are not particularly star driven. It is reasonable to assume that the success of The Incredibles and Finding Nemo cannot be credited to the box office clout of Craig T. Nelson and Albert Brooks. Still, it is also reasonable to assume that having Ryan Reynolds as the lead of the picture did not do a lot to drive ticket sales.

By the same token but to a lesser degree, there's a limit to Reynolds' role in the failure of RIPD. This was a badly reviewed, badly marketed movie that obviously had studio executives nervous well before its release. Part of that nervousness was due to the lack of interest Reynolds was inspiring, but it's not clear that a different actor could have made this a break-even project.

What we can conclude, again with the usual counterfactual caveats, is that there is no evidence that Reynolds can pull people into the theater. That is not to say he chases people out -- Safe House and Wolverine did good business, while the Sandra Bullock romantic comedy The Proposal broke an astounding $300 million worldwide -- but when it comes to what we might call Reynolds' "Sweet Home" moments, films such as The Change-Up that largely depend on the star's pull, he has consistently underperformed.

The point here is not that Reynolds' career is hopeless, partially because many would-be next-big-thing leading men (Robert Preston, Leslie Nielsen) have reinvented themselves after faltering careers, but mainly because there's a bigger issue here. Despite their reputation for rationality, corporations and even entire industries are as vulnerable to extraordinary popular delusions as the rest of us are. These beliefs are usually fundamentally optimistic, revolving around some promising development. It can be faith in the potential star power of an actor despite major box office disappointments or the conviction that a new business model can save an industry despite the fact that no one can explain how it can make any money.

Wednesday, July 24, 2013

Intellectual property and genes

More on intellectual property:

Advocates of tough intellectual property rights say that this is simply the price we have to pay to get the innovation that, in the long run, will save lives. It’s a trade-off: the lives of a relatively few poor women today, versus the lives of many more women sometime in the future. But this claim is wrong in many ways. In this particular case, it is especially wrong, because the two genes would likely have been isolated (“discovered,” in Myriad’s terminology) soon anyway, as part of the global Human Genome Project. But it is wrong on other counts, as well. Genetic researchers have argued that the patent actually prevented the development of better tests, and so interfered with the advancement of science. All knowledge is based on prior knowledge, and by making prior knowledge less available, innovation is impeded. Myriad’s own discovery — like any in science — used technologies and ideas that were developed by others. Had that prior knowledge not been publicly available, Myriad could not have done what it did.

I think that this is one of the recurring themes of intellectual property arguments: the advocates claim that the huge positional benefits of these policies (in generating revenue for incumbents) are necessary to encourage progress.  But that claim rests on all sorts of questionable assumptions, like the assumption that incumbents will deploy these resources in ways that encourage social benefit. 

One thing that makes a lot of sense is to look at times and places that showed evidence of fast growth and innovation.  Tight trade guild rules, for example, seem to be anti-correlated with rapid industrial or technological progress.  That should be a cautionary note.

And, as Mark has carefully noted before, there isn't a binary choice here between massive intellectual property protection and no intellectual property protection.  There are some pretty reasonable middle ground positions that are less extreme than the modern regime but still protect the rights of private discovery.  Nor should we entirely rule out government funded research programs -- these can be much less expensive than private sector research (see the NSF, for example) but still ensure that innovation is not under-supplied. 

Nate Silver didn't have a problem with old-style journalistic values, but he may have had a problem with the new ones

New York Times public editor Margaret Sullivan speculates on the possible reasons for Nate Silver's decision to move 538 to Disney:
I don’t think Nate Silver ever really fit into the Times culture and I think he was aware of that. He was, in a word, disruptive. Much like the Brad Pitt character in the movie “Moneyball” disrupted the old model of how to scout baseball players, Nate disrupted the traditional model of how to cover politics.

His entire probability-based way of looking at politics ran against the kind of political journalism that The Times specializes in: polling, the horse race, campaign coverage, analysis based on campaign-trail observation, and opinion writing, or “punditry,” as he put it, famously describing it as “fundamentally useless.” Of course, The Times is equally known for its in-depth and investigative reporting on politics.

His approach was to work against the narrative of politics – the “story” – and that made him always interesting to read. For me, both of these approaches have value and can live together just fine.

* A number of traditional and well-respected Times journalists disliked his work. The first time I wrote about him I suggested that print readers should have the same access to his writing that online readers were getting. I was surprised to quickly hear by e-mail from three high-profile Times political journalists, criticizing him and his work. They were also tough on me for seeming to endorse what he wrote, since I was suggesting that it get more visibility.

Many others, of course, in The Times’s newsroom did appreciate his work and the innovation (not to mention the traffic) that he brought, and liked his humility.

* The Times tried very hard to give him a lot of editorial help and a great platform. It bent over backward to do so, and this, too, disturbed some staff members. It was about to devote a significant number of staff positions to beefing up his presence into its own mini-department.
Conventional wisdom holds that traditional print journalism is in trouble because it has failed to adequately change with the times, but if the reasons given by Ms. Sullivan did have something to do with Nate Silver leaving the New York Times (very much a point open to debate), then you can argue that in this case print journalism is in trouble because it has changed, just in an unfortunate direction.

Look at the aspects of the NYT culture which Sullivan lists as potential sources of conflict:

Fixated on polling and the horse race;

Pundit-centric;

Narrative obsessed;

Tribal;

Hostile to criticism.

These represent decades-long trends in journalism. Each has gotten more pronounced with every election cycle. Polls have proliferated. Horse race speculation continues to creep forward. More and more "news" consists of pundits confidently stating opinions. Narratives enforce increasingly pervasive groupthink on the media. Clannish social dynamics overpower journalistic ethics and propriety. Press criticism has become the domain of apologists like Jack Shafer and David Carr.

What's amazing here is just how much Sullivan has internalized this Twenty-First Century NYT culture. She describes an institution dominated by infantile trivialists, and it never seems to occur to her that the rest of us might see this as anything other than 'there are two sides to every story.'

Tuesday, July 23, 2013

Personal finance

There is a very good post by a tenured California professor about the challenges of making ends meet in an expensive city.  It's true (and she admits it) that the student loan decisions were not ideal.  But it is also remarkable how many of the expenses of her position get shifted onto her.  In the private sector, I can't imagine being told to pay for a required conference out of my own pocket. 

Personal blogroll -- Rocco Pendola

I really shouldn't like this guy. For one thing, he writes for TheStreet, which puts him zero degrees of separation from Jim Cramer. He also sometimes appears as a talking head on CNBC, another huge black mark. But as I continue doing more research for various media posts, I keep coming across relevant Pendola articles that manage to cut directly to the central questions.

He has that increasingly rare facility of recognizing the obvious when the obvious does not match the official narrative. He questions the rather odd numbers coming out of Netflix. He realizes that any story about the business side of Hulu has got to revolve around who owns the company. He understands the importance of not putting too much weight on absolute numbers when looking at the Internet and of taking things in context instead. He makes necessary distinctions between different types of royalties, particularly the royalties paid to performers versus the royalties paid to songwriters.

I suspect this independence comes in large part from the fact that Pendola has a very different background than most of the people who write financial news. One of the big recurring themes at West Coast Stat Views is just how insular and inbred the journalistic community has become. This is, if anything, a bigger problem for financial journalists. A majority of the voices you read in Forbes or Business Insider or Bloomberg have basically the same background, were educated at the same very small set of schools, have had similar career tracks, live in the same region (and often in the same neighborhoods), read the same publications, and frequently have a common social circle.

The result is a monoculture, and just as planting fields upon fields of the same variety of corn makes them prone to outbreaks of blight, having a journalistic community made up of remarkably similar people makes it vulnerable to bad narratives and questionable memes.

Rocco Pendola is a former radio producer, DJ, and sports talk show host who went on to get an urban planning degree at San Francisco State and then settled in Southern California. The result of that unusual resume is a certain level of resistance to those journalistic blights. The following rant about the coverage of Pandora, though somewhat overheated, nicely illustrates his outsider perspective.
Already this week we have seen Business Insider publish an eight-month old blog post passing it off as "today's" news. Then there was Greg Sandoval's Pandora hit job over at The Verge where he passed off Tim Westergren dining on a "truffle-infused Kobe beef burger" with investment bankers as somehow germane to the royalty conversation. Earlier in the same article, Sandoval passes off the inability of All Things D's Peter Kafka to conduct real reporting as a pockmark against Pandora. Of course, that's what they teach you in Journalism 101: Tweet the CTO of a public company to get answers to your most vexing questions. 
This is where we are -- next these guys will pick through Westergren's garbage and produce smashed vinyl copies of Pink Floyd's Dark Side of the Moon. They'll mark it as "EXCLUSIVE" or "BREAKING" and take pats on the back from the clique of colleagues and "industry sources" they work so feverishly not to piss off. Forget doing actual work to get the real story or find something closer to the truth; it's not about the reader, it's about the personal relationships they maintain that the general public couldn't care less about. 
We have come to a point where not only in this story, but, sadly, in the broader scope, journalists routinely make something out of nothing and expect companies to spoon feed them information in lieu of doing actual journalism.

Credit where credit is due...

I don't know if I'll have time to give this the attention it deserves, but this long but worthwhile post by Noam Scheiber hits on at least a couple of our big recurring  threads: the ways that businesses and industries get into trouble; and the importance and difficulty of getting incentives properly aligned.

Here's an example:
There was frustration with other aspects of the new compensation system, too. Previously, partners were reluctant to ask colleagues to help on their pitches, because credit was a zero-sum game: If a partner landed the business, she would have to award some of the credit to the colleague, leaving less for herself. Under the new rules, the firm allowed the partner to claim up to 100 percent of the credit herself, then dole out up to 100 percent more among any partners who had helped. 
This encouraged collaboration at times, according to several former partners. The downside was that many began to view the additional 100 percent worth of credit as a slush fund, ladling it out to friends with little role in their cases or transactions. “It led to sleazy deals,” recalls one former partner. “It took about thirty seconds for people to figure it out.” Says a former finance lawyer of two senior partners in his group: “I saw the billing going around. One was getting credit on stuff the second opened, and the second was getting credit for stuff the first one opened.” There seemed to be no way around it: The more Mayer Brown set out to fix its problems, the more deviously its partners behaved.


Monday, July 22, 2013

Urban Sprawl

Mark Thoma's site has a link to Paul Krugman's discussion of the association between sprawl and low social mobility.  It appears that if you plot urban density against social mobility from the lowest quintile to the highest quintile, you get a very surprising linear relation: as density drops, it looks like persistent inequality rises.  Paul Krugman is appropriately skeptical that this is the whole story:
Is the relationship causal? You can easily think of reasons for spurious correlation: sprawl is associated with being in the sunbelt, with voting Republican, with having weak social safety net programs, etc.. Still, it’s striking.

Matt Yglesias adds additional data about what happens with kids who move into high density urban areas as well as a few other possible explanations:
So what drives this? The study does not really make a high-powered effort to draw strong causal inferences. But the study does show that kids who moved into a high-mobility area at a young age do about as well as the kids born in high-mobility areas, but kids who move as teenagers don't. So there seems to be a factor that isn't parent-driven. The authors report that tax policy, the existence and affordability of local colleges, and the level of extreme local wealth do not appear to be strong correlates of intergenerational mobility. Metro areas where the poor are geographically isolated from the middle class have less intergenerational mobility, while metro areas with more two-parents households, better elementary and high schools, and more "civic engagement" (measured through membership in religious and community groups) have more.
So clearly it would be a mistake to over-interpret these data. But they do have one major policy piece embedded in them -- it makes absolutely no sense to subsidize sprawl as a positive good.  It may not be worth it to try to discourage it, but there are a lot of laws (think zoning laws and car-centered transportation grids) that implicitly subsidize suburban communities.

There are still pieces to be considered -- like whether the poorest quintile does objectively better or worse in the low social mobility environments (you can justify low mobility if everyone is better off as a result).  However, the two extremes in Paul Krugman's graph are Atlanta (low density and mobility) and Los Angeles (high density and mobility).  It's not 100% clear that it is better to be poor in California than in Georgia, but it isn't as if it is far worse in California so far as I can tell.  Maybe Mark can weigh in here? 

But this all points to a bigger picture: urban planning is actually a much bigger deal than I had previously realized. 


Not really movie people...

I signed up for Netflix recently, and one of the things I've noticed is that their blurbs often do an extraordinarily bad job of pitching. Compared to the capsules you'll find from Maltin or TV Guide, they have a tendency to leave out the details that would make a person who would enjoy a movie actually go ahead and watch it.

This is a bigger deal than you might think. Consider the Netflix viewing model:

1. I watch a show;

2. Netflix recommends other shows;

3. Based on the information provided and what I know about the show, I decide whether or not to pay the cost in time required to watch it.

It is important to note that this process adds the most value when it gets me to watch and enjoy a show that I was previously unaware of. Here's where a good blurb is vital. Even if the recommendation model works perfectly, the process is a failure when I don't watch the film it recommends.

With that in mind, check out this Netflix description:
Royal Deceit (Prince of Jutland)
1994 | R | 1hr 25m
Young Jute Amled avenges his father's murder at the hands of his dissembling uncle, only to be banished to Scotland for the deed. But Amled has other plans, and soon practices a little deception of his own.
Cast: Gabriel Byrne, Helen Mirren, Christian Bale
Genre: Action & Adventure, Dramas, Adventures, Crime Action & Adventure
This movie is: Dark
With the exception of some interesting names in the cast, there's nothing (at least nothing explicit) that would make me want to watch this. That would represent a significant failure for Netflix because the right blurb (from TV Guide) could and did make me watch Royal Deceit on ThisTV a few months ago.

That summary mentioned that the film was directed by Gabriel Axel (1987's BABETTE'S FEAST).

It included Brian Cox, Kate Beckinsale and Tom Wilkinson in the cast list.

And most importantly, it pointed to the aspect of the film most likely to interest the target audience. You may have guessed this part, but if not, here's a hint: in various retellings of the legendary story of this Danish prince,  Amled is often spelled Amleth.

I don't have the TV Guide blurb in front of me but I believe it read a great deal like the first paragraph of their review.
A reworking of Shakespeare's Hamlet, ROYAL DECEIT may lack the Bard's lyrical dialogue but it does boast some sensational action sequences and a truly top-notch cast.
I really do want to see Netflix succeed -- it's a good service for the price and significantly diversifies the media landscape -- but I can't get past the suspicion that the people behind Netflix are bored with the actual business of running the company. They like the part where people call them visionaries and give them a gazillion dollars for their stock, but the low glamour stuff makes their eyes glaze over.

This is one of the reasons I keep going back to Weigel as an example. They obviously love the details (Joe Dale has apparently memorized thousands of TV episodes). When Steve Jobs famously got upset because a headphone jack on an iPod didn't have a satisfying click, he was illustrating this same mentality. You saw similar attitudes in Sam Walton and Don Tyson.

I'm not suggesting that we should judge management based on one anecdote, but the more I look at the company, the more it looks like they don't care about details like blurbs and audience demographics. And as someone who wants to see Netflix make it, that worries me.

Sunday, July 21, 2013

Safe failures from "Why Are Movies So Bad? Or, The Numbers"

I'd been meaning to spread out the Pauline Kael posts, but I realized that this 1980 essay about the relationship between corporate culture, misaligned incentives and cinema has an extraordinary number of salient points for some of our ongoing threads.

For example, Kael explains how, under a fairly common set of circumstances, it can be in an executive's best interests to opt for a project with higher costs and lower potential returns.

Why Are Movies So Bad? Or, The Numbers
There is an even grimmer side to all this: because the studios have discovered how to take the risk out of moviemaking, they don’t want to make any movies that they can’t protect themselves on. Production and advertising costs have gone so high that there is genuine nervous panic about risky projects. If an executive finances what looks like a perfectly safe, stale piece of material and packs it with stars, and the production costs skyrocket way beyond the guarantees, and the picture loses many millions, he won’t be blamed for it—he was playing the game by the same rules as everybody else. If, however, he takes a gamble on a small project that can’t be sold in advance—something that a gifted director really wants to do, with a subtle, not easily summarized theme and no big names in the cast—and it loses just a little money, his neck is on the block. So to the executives a good script is a script that attracts a star, and they will make their deals and set the full machinery of a big production in motion and schedule the picture’s release dates, even though the script problems have never been worked out and everyone (even the director) secretly knows that the film will be a confused mess, an embarrassment.
It's worth noting that since Kael wrote this in 1980 (when she was describing a fairly new situation), budgets for major releases have far outpaced inflation. As far as I can tell, none of the top ten films of that year cost more than $90 million in 2013 dollars and only two broke $50 million (the major spectacle and proven property Empire Strikes Back and the notoriously bloated Blues Brothers). I suspect the dynamic Kael describes has a lot to do with that change.
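As a rough illustration of the inflation adjustment behind those figures, here is a minimal sketch. The CPI values (roughly 82.4 for 1980 and 233 for 2013) and the sample 1980 budgets are my assumptions for illustration, not numbers taken from the post or from Kael.

```python
# A minimal sketch of converting 1980 movie budgets to 2013 dollars.
# Assumptions: annual-average CPI-U of about 82.4 (1980) and 233 (2013);
# the sample budgets below are hypothetical round numbers.
CPI_1980, CPI_2013 = 82.4, 233.0

def to_2013_dollars(budget_1980_millions: float) -> float:
    """Convert a 1980 budget (in millions of dollars) to approximate 2013 dollars."""
    return budget_1980_millions * CPI_2013 / CPI_1980

for budget in (18, 27, 32):  # hypothetical 1980 budgets, in millions of dollars
    print(f"${budget}M in 1980 is roughly ${to_2013_dollars(budget):.0f}M in 2013 dollars")
# roughly $51M, $76M, and $90M respectively -- the range the post is describing
```

Under those assumptions, a 1980 production would have needed to cost roughly $32 million to clear the $90 million (2013) ceiling mentioned above.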




Saturday, July 20, 2013

Weekend blogging -- Kael on Directors

From Trash, Art, and the Movies by Pauline Kael
The craftsmanship that Hollywood has always used as a selling point not only doesn’t have much to do with art—the expressive use of techniques—it probably doesn’t have very much to do with actual box-office appeal, either. A dull movie like Sidney Furie’s “The Naked Runner” is technically competent. The appalling “Half a Sixpence” is technically astonishing. Though the large popular audience has generally been respectful of expenditure (so much so that a critic who wasn’t impressed by the money and effort that went into a “Dr. Zhivago” might be sharply reprimanded by readers), people who like “The President’s Analyst” or “The Producers” or “The Odd Couple” don’t seem to be bothered by their technical ineptitude and visual ugliness. And on the other hand, the expensive slick techniques of ornately empty movies like “A Dandy in Aspic” can actually work against one’s enjoyment, because such extravagance and waste are morally ugly. If one compares movies one likes to movies one doesn’t like, craftsmanship of the big-studio variety is hardly a decisive factor. And if one compares a movie one likes by a competent director such as John Sturges or Franklin Schaffner or John Frankenheimer to a movie one doesn’t much like by the same director, his technique is probably not the decisive factor. After directing “The Manchurian Candidate” Frankenheimer directed another political thriller, “Seven Days in May,” which, considered just as a piece of direction, was considerably more confident. While seeing it, one could take pleasure in Frankenheimer’s smooth showmanship. But the material (Rod Serling out of Fletcher Knebel and Charles W. Bailey II) was like a straight (i.e., square) version of “The Manchurian Candidate.” I have to chase around the corridors of memory to summon up images from “Seven Days in May”; despite the brilliant technique, all that is clear to mind is the touchingly, desperately anxious face of Ava Gardner—how when she smiled you couldn’t be sure if you were seeing dimples or tics. But “The Manchurian Candidate,” despite Frankenheimer’s uneven, often barely adequate, staging, is still vivid because of the script. It took off from a political double entendre that everybody had been thinking of (“Why, if Joe McCarthy were working for the Communists, he couldn’t be doing them more good!”) and carried it to startling absurdity, and the extravagances and conceits and conversational non sequiturs (by George Axelrod out of Richard Condon) were ambivalent and funny in a way that was trashy yet liberating.
On a related note, I read The Manchurian Candidate not that long ago and I was struck by how faithful the movie was (the original, not the incredibly pointless remake), but also by how much more restrained it was. The book was a full-bore, pitch-black satiric farce. There is simply no way you could have gotten the sexual content (including an explicit incest subplot and a wartime incident that plays like something conceived by the Farrelly brothers) past even a fading Hays Code. More importantly, in 1962 the red scare was still fresh enough that no major studio film would have had the nerve to take the central joke as far as the book did and leave no doubt about who these murderous, sexually deviant communist agents were supposed to be.




Friday, July 19, 2013

Wages

One issue that is brought up by Mark's recent post but not explicitly discussed is the issue of "living wages".  It is popular to argue that wages are set by fundamental market forces and thus are deserved.  But that view isn't universal, and Justin Fox claims it may be seeing some pushback:
That's because it's becoming clear that pay levels aren't entirely set by the market. They are also affected by custom, by the balance of power between workers and employers, and by government regulation. Early economists understood that wage setting was "fundamentally a social decision," Jonathan Schlefer wrote on HBR.org last year, but their 20th century successors became fixated on the idea of a "natural law" that kept pay in line with productivity. And this idea that wages are set by inexorable economic forces came to dominate popular discourse as well.
One of the better pieces of evidence he brings up is the difference between the experiences of the American and German auto-workers:
In 2010, Germany produced more than 5.5 million automobiles; the U.S. produced 2.7 million. At the same time, the average auto worker in Germany made $67.14 per hour in salary and benefits; the average one in the U.S. made $33.77 per hour. Yet Germany’s big three car companies—BMW, Daimler (Mercedes-Benz), and Volkswagen—are very profitable.

Now, it is hard to build a theory around an anecdote.  But science demands that we look for places where the theory does not fit the facts, and facts like this are inconvenient -- not because one can't come up with possible confounding factors, but because maintaining both high wages and high market share seems paradoxical under the prevailing view. 

But really it should make us question whether the market is a law of nature or a social construct.  Once you accept that wage decisions are socially negotiated, issues like wage inequality become much more salient. 


First assume a fairy godmother...

This is one of those stories that illustrate just how bad journalists have gotten at covering life in the bottom quartile. Here, from Marketplace, is the set-up:
The fast food chain teamed up with Visa to create an online budget guide for its employees. And most of the criticism is directed at the fact that the company's budget doesn't list 'food' or 'heat' as monthly budget items. 
...
"Helping you succeed financially is one of the many ways McDonald's is creating a satisfying and rewarding work environment," the McDonald's site's about page states. "So you can take the next step towards financial freedom." 
To do that, the guide suggests journaling daily expenses, setting up a budget and outlining a savings goal. Sound reasonable? 
One problem: the sample budget offered by McDonald's (below) doesn't mention money for basic necessities like food, heat, gas and clothing. 
The budget also assumes a worker will need to maintain two jobs in order to make roughly $24,500 a year.

Here's the actual document:



A heated debate has broken out over whether it's possible to live on $24,500 a year. This is not a question that would perplex a group pulled at random from the general populace. People do it all the time. I've done it myself (and yes, I'm adjusting for inflation). I even have a musician friend in New York City who's doing it now.

You eat lots of beans and potatoes. You get a prepaid phone. You buy a set of rabbit ears (which, as mentioned before, would actually give you more channels and a better picture than the basic cable the WP article suggests). You live day-to-day. You constantly worry about money. You're one bad break away from disaster, but with the exception of the health insurance and heating items, nothing in the expenses, including rent, is that unreasonable.

There is, in fact, only one completely unrealistic item here:

Second job: $955

Angry Bear, which does get it, explains just how much work we're talking about.
Besides skipping certain expenses and skimping on others, to meet the income levels portrayed in the budget, McDonalds suggests associates work not one but two jobs: a full time job at McDonalds and a part time job elsewhere totaling 62 hours per week (if the worker resides in Illinois where the minimum wage is $8.25/hour). If perchance the worker resides in one of the other 48 states, the total hours needed to hit the suggested income level jumps to 74 hours/week due to a lower minimum wage (the equivalent of a second full time job). 
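As a rough sanity check on those hour counts, here is a minimal back-of-the-envelope sketch. It treats the sample budget's two monthly income lines ($1,105 and $955) as straight gross pay and assumes an average 4.33-week month, so it lands a bit below Angry Bear's 62- and 74-hour figures, which also allow for taxes; either way, the implied workweek is punishing.

```python
# Back-of-the-envelope check of the hours implied by the McDonald's sample budget.
# Assumptions: the budget's two income lines are treated as gross pay and a month
# is 52/12 weeks; Angry Bear's 62- and 74-hour figures also account for taxes,
# which is why these come out lower.
monthly_income = 1105 + 955          # the two income lines in the sample budget
weeks_per_month = 52 / 12            # about 4.33

for label, wage in [("Illinois minimum wage, $8.25/hr", 8.25),
                    ("federal minimum wage, $7.25/hr", 7.25)]:
    hours = monthly_income / wage / weeks_per_month
    print(f"{label}: roughly {hours:.0f} hours per week")
# roughly 58 and 66 hours per week -- before taxes even enter the picture
```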
And Marketplace explains how unlikely that 74 is:
At the same time, there’s been a sharp drop in the number of people who are holding down multiple jobs, and most of those are likely to be part-time, since there are only so many hours in a day. The number of multiple job-holders is down by more than 500,000 since 2007.  So, there are more people in part-time jobs, but fewer people able to cobble together two or more of those jobs to make ends meet.
...
This trend to more part-time work could be permanent. Employers like the flexibility, and the low cost. Benefits in many part-time jobs -- health care, retirement -- are slim to none.

But there’s a complication. For job-seekers, it’s now harder to find and keep multiple part-time jobs. “Among low-wage employers -- retail, hospitality, food service -- employers are requiring their employees to say they’re available for a full-time schedule, even when they know they’re never going to schedule them for full-time,” says Stephanie Luce at the City University of New York’s Murphy Institute.

Luce is a labor sociologist who studies union movements around the world. She co-authored, with the Retail Action Network, a study based on surveys of retail workers in New York, Discounted Jobs: How Retailers Sell Workers Short. “Managers are asked to schedule based on customer-flow, on weather, on trends in the economy, and to change the schedule day-to-day,” says Luce. “They don’t want employees that are going to say ‘I can’t come in, I have another job.’ They want employees that’ll say, ‘OK, I’ll come in if you need me. I won’t come in if you don’t need me.’”  


Thursday, July 18, 2013

If we just look at climate change, we should do these things. If we take climate change out of the picture we should still do these things

Massoud Amin, chair of the Institute of Electrical and Electronics Engineers Control Systems Society's Technical Committee on Smart Grids, has a must-read opinion piece up at Nature (unfortunately behind a paywall, though you can get most of the same information from this interview). He lays out a highly persuasive case based on both economic benefits, like greatly reducing the number and duration of power outages (which are currently estimated to cost the US economy between US$80 billion and $188 billion each year), and environmental benefits, like reducing carbon dioxide emissions by 12–18% by 2030.

Two things in particular struck me as I read this. The first was that Amin could make his argument on strictly economic terms or on strictly environmental ones. There's an obvious parallel here with fixing choke points in our freight rail system, an infrastructure improvement that would pay for itself two or three times over in areas like transportation costs, highway congestion and wear-and-tear on roads and bridges. Our inability to take action is so entrenched that we can't take significant steps to address climate change even when there's an overwhelming non-environmental case for moving forward.

The second thing that hit me was the cost Amin gives: around $400 billion, or $21 billion to $24 billion a year for 20 years. That is a great deal of money in absolute terms, but when you start looking at the various costs associated with climate change -- sea level rise, ocean acidification, droughts, extreme weather -- you get into the trillions fairly quickly (you might well hit a trillion in Florida alone).

What follows is very back-of-the-envelope, but based on the US share of carbon emissions, it looks like, if you make all sorts of simplifying assumptions, the smart grid impact would be around 2.5% of global totals. In gross terms, ignoring all other economic benefits, smart grids are not a particularly cheap way of reducing carbon emissions (in net terms they actually pay for themselves, but put that aside), but $400 billion for a two-and-a-half-percent reduction doesn't seem that high.
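For what it's worth, here is one way to reconstruct that back-of-the-envelope number. The US share of global CO2 emissions (roughly one-sixth at the time) is my assumption; the 12–18% reduction in US emissions is Amin's estimate.

```python
# Back-of-the-envelope reconstruction of the ~2.5% figure. The assumed US share
# of global CO2 emissions (about 16.5% circa 2010) is not from the post; the
# 12-18% cut in US emissions is Amin's estimate.
us_share_of_global = 0.165
low_cut, high_cut = 0.12, 0.18

low = low_cut * us_share_of_global * 100
high = high_cut * us_share_of_global * 100
print(f"Implied reduction in global emissions: about {low:.1f}% to {high:.1f}%")
# about 2.0% to 3.0%, i.e. roughly 2.5% at the midpoint
```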

The climate change debate is often framed as a choice between saving the planet and saving the economy, but when you look at the proposed solutions, they either don't seem to put that much of a drag on the economy (carbon taxes, moving away from coal) or they actually pay for themselves (infrastructure improvements) while the potential economic damage of climate change dwarfs the combined costs of all of the proposed solutions.


Wednesday, July 17, 2013

Intellectual property and Marvel

(I told you I'd connect Stan Lee to this)

For about the past fifty years, the company which is now Marvel Entertainment has made a spectacular amount of money and has done it in virtually every medium, from comics to television to film to video games to novels to even, Heaven help us, the stage.

This is all the more remarkable when you consider that shortly before its period of dominance, the company was a third-rate imprint that was, by some accounts, on its last legs.

The rise was a remarkable achievement both in American publishing and pop culture. In economic terms alone, it shows how a small company with almost no resources or structural advantages can come to dominate an industry and generate billions of dollars.

One aspect of that story which is particularly relevant given our recent posts on the subject is the way Stan Lee used public domain (and in some cases, not-quite-public domain) intellectual properties as an important part of his business model.

First a quick and hopefully painless bit of comic book history.

Superheroes were the first big original content hit of the medium. Starting with Superman in 1938, they dominated that side of the industry for almost a decade. Licensed titles (like Dell's Disney line) were, by some sources, bigger sellers but if you were creating characters specifically for comics in the early Forties, superheroes were where the money was. By the end of the decade, though, the boom was largely over and other fads such as crime, horror, Western, funny animals, funny teenagers and (with a very unlikely origin) romance took turns as the next big thing.

Of course, comic book publishers kept trying to bring back the genre that had started it all. Lots of companies tried to introduce new superheroes or dust off old ones, but without real success. Among others, the company that would become Marvel was particularly badly burned by a superhero push in the mid-Fifties. The big exception here is Magazine Enterprises' Ghost Rider in 1949, but as Don Markstein pointed out, that character blended the faded superhero genre with the up-and-coming genres of Western and horror.

It was not until 1956 that a team working under DC editor Julius Schwartz came up with a workable formula: take a dormant mid-tier character from the Forties and completely rework the character (sometimes keeping only the name) with a science fiction origin, streamlined jumpsuit-inspired costumes, and a heavy emphasis on space-age themes.

In rapid succession and generally with great success,  Schwartz applied this rebooting approach to a number of properties and soon other companies were trying their hand. As early as 1959, Archie Comics (which had been known for a relatively successful and very violent collection of superhero titles in the Forties) had hired Joe Simon and Jack Kirby to rework their character the Shield. As the Sixties got going almost everyone was in on the act.

In 1961, Marvel Comics joined in. Marvel was a small part of Martin Goodman's middling publishing company but it did have a couple of significant assets: a few well-remembered Golden Age characters (the Human Torch, Namor and Captain America) and comics auteur Jack Kirby. Given the market conditions of the time, Kirby's brand was extremely valuable. There was a tremendous demand for all things associated with the previous era of superheroes and Kirby had been a major player with an exceptional level of prominence. In an era when most stories went unsigned, his name was a big enough selling point to be featured prominently on the covers.

Much myth has accumulated around the creation of the Fantastic Four, partially because of the impact the title would go on to have and partially because none of the people involved (Goodman, Lee, Kirby) can be considered reliable narrators, but if you simply look at the book itself and what had been going on in the industry at the time, FF #1 is about what you would expect, combining Schwartz's formula with the group dynamics of Kirby's team comics and elements of the monster comics Marvel had been producing (the two most unusual aspects, the lack of costumes and secret identities, were completely dropped when Marvel introduced Spider-Man less than a year later).

I don't mean any disrespect for Marvel here. This is usually how companies really work. You find an existing business model, modify it slightly, then use it to establish a revenue stream and a loyal customer base. That's what Lee did. That's what Sam Walton did with Ben Franklin stores. The big ideas and innovation tend to come only after you have a successful stable operation (which was certainly the case with Marvel). That leads to a much bigger point.

We have seen over the past few years a tendency to grant intellectual property protection to ideas that would previously have been considered general parts of a business plan (for example, offering free wi-fi to customers). What if the ability to borrow and recombine elements of business plans, in a kind of de facto genetic algorithm, is an important part of a creative economy? What if being derivative is the first step toward coming up with something original?

There are also some interesting IP questions involving the creation of Spider-man (Wikipedia has a good summary), but that's a discussion for another time. The part of the story that's most relevant comes a couple of years later.

As mentioned previously, from approximately 1956 to 1966, the big thing in comics was to modernize and reboot Golden age characters. This left Marvel with a problem: With the exception of Captain America, the Human Torch and Namor, the company had a very thin bench. You very soon got down to really obscure characters. The whole purpose of the reboot model is to cash in on name recognition so rebooting the virtually forgotten is of limited value. (You have to wonder how many readers in the Sixties had ever heard of the Thirties Tarzan knock-off Ka-Zar.)

Lee's solution was to launch characters using at least the names of three of the biggest sellers of the Golden Age: Daredevil, Ghost Rider, and Captain Marvel, none of which actually belonged to Marvel, but were instead arguably in the public domain. It is the third one that required considerable nerve.

Captain Marvel had been, by some standards, the most successful character to come out of the Golden Age, outselling even Superman (nuisance suits from DC were a big factor in the decision to eventually cancel the series in 1953). What's more, the publisher, Fawcett, was big and, though out of the comics business, still very active, publishing titles including Family Circle, Woman's Day, Mechanix Illustrated and Gold Medal paperbacks.

Lee was betting (correctly, as it turned out) that Fawcett either wouldn't notice or wouldn't bother to sue over an obvious copyright infringement. It was a bold but not a reckless move. Attitudes toward copyrights have changed greatly since then, and many of those changes involve moving away from the earlier emphasis on active properties and going concerns. Up until recently, the primary reason you acquired and held copyrights was because you wanted to do something with those properties. As a result, if someone went out of the comics business and no one had an immediate interest in their properties, the copyrights were often allowed to lapse or (in the case of Fawcett) go unenforced.

There's a lesson here about creative destruction. Companies, particularly those in the creation business, often start out by borrowing business plans and skirting copyright and patent laws. You can certainly argue that this lowers the value of the intellectual property they are making use of, but I think you can also argue, as or more persuasively, that the returns on tolerating this behavior from small, young companies far outweigh the cost.

For more on the IP beat, click here, here, and here.