Tuesday, July 23, 2013

Personal finance

There is a very good post by a California tenured professor about the challenges of making ends meet in an expensive city.  It's true (and she admits it) that her student loan decisions were not ideal.  But it is also remarkable how many of the expenses of her position get shifted onto her.  In the private sector, I can't imagine being told to pay for a required conference yourself.

Personal blogroll -- Rocco Pendola

I really shouldn't like this guy. For one thing, he writes for TheStreet, which puts him zero degrees of separation from Jim Cramer. He also sometimes appears as a talking head on CNBC, another huge black mark. But as I continue doing research for various media posts, I keep coming across relevant Pendola articles that manage to cut directly to the central questions.

He has that increasingly rare facility of recognizing the obvious when the obvious does not match the official narrative. He questions the rather odd numbers coming out of Netflix. He realizes that any story about the business side of Hulu has got to revolve around who owns the company. He understands the importance of not putting too much weight on absolute numbers when looking at the Internet and instead taking things in context. He makes necessary distinctions between different types of royalties, particularly those paid to performers versus those paid to songwriters.

I suspect this independence comes in large part because Pendola has a very different background than most of the people who write financial news. One of the big recurring themes at West Coast Stat Views is just how insular and inbred the journalistic community has become. This is, if anything, a bigger problem for financial journalists. A majority of the voices you read in Forbes or Business Insider or Bloomberg have basically the same background, were educated at the same very small set of schools, have had similar career tracks, live in the same region (and often in the same neighborhoods), read the same publications, and frequently share a common social circle.

The result is a monoculture, and just as planting field after field with the same variety of corn makes the crop prone to outbreaks of blight, having a journalistic community made up of remarkably similar people makes it vulnerable to bad narratives and questionable memes.

Rocco Pendola is a former radio producer, DJ, and sports talk show host who went on to get an urban planning degree at San Francisco State and then settled in Southern California. The result of that unusual resume is a certain level of resistance to those journalistic blights. The following rant about the coverage of Pandora, though somewhat overheated, nicely illustrates his outsider perspective.
Already this week we have seen Business Insider publish an eight-month old blog post passing it off as "today's" news. Then there was Greg Sandoval's Pandora hit job over at The Verge where he passed off Tim Westergren dining on a "truffle-infused Kobe beef burger" with investment bankers as somehow germane to the royalty conversation. Earlier in the same article, Sandoval passes off the inability of All Things D's Peter Kafka to conduct real reporting as a pockmark against Pandora. Of course, that's what they teach you in Journalism 101: Tweet the CTO of a public company to get answers to your most vexing questions. 
This is where we are -- next these guys will pick through Westergren's garbage and produce smashed vinyl copies of Pink Floyd's Dark Side of the Moon. They'll mark it as "EXCLUSIVE" or "BREAKING" and take pats on the back from the clique of colleagues and "industry sources" they work so feverishly not to piss off. Forget doing actual work to get the real story or find something closer to the truth; it's not about the reader, it's about the personal relationships they maintain that the general public couldn't care less about. 
We have come to a point where not only in this story, but, sadly, in the broader scope, journalists routinely make something out of nothing and expect companies to spoon feed them information in lieu of doing actual journalism.

Credit where credit is due...

I don't know if I'll have time to give this the attention it deserves, but this long but worthwhile post by Noam Scheiber hits on at least a couple of our big recurring threads: the ways that businesses and industries get into trouble; and the importance and difficulty of getting incentives properly aligned.

Here's an example:
There was frustration with other aspects of the new compensation system, too. Previously, partners were reluctant to ask colleagues to help on their pitches, because credit was a zero-sum game: If a partner landed the business, she would have to award some of the credit to the colleague, leaving less for herself. Under the new rules, the firm allowed the partner to claim up to 100 percent of the credit herself, then dole out up to 100 percent more among any partners who had helped. 
This encouraged collaboration at times, according to several former partners. The downside was that many began to view the additional 100 percent worth of credit as a slush fund, ladling it out to friends with little role in their cases or transactions. “It led to sleazy deals,” recalls one former partner. “It took about thirty seconds for people to figure it out.” Says a former finance lawyer of two senior partners in his group: “I saw the billing going around. One was getting credit on stuff the second opened, and the second was getting credit for stuff the first one opened.” There seemed to be no way around it: The more Mayer Brown set out to fix its problems, the more deviously its partners behaved.
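
To see why that rule was so easy to game, here's a toy calculation (all the numbers are invented; the mechanism is the one described above): each deal can generate up to 200 percent worth of credit, so two partners who cross-credit each other can double their apparent books without adding a dollar of business.

# Toy model of the credit rule described above (all numbers invented):
# a partner can claim up to 100% of a deal's credit AND hand out up to
# 100% more, so each deal can generate up to 200% worth of credit.

deals = {"partner_a": 1_000_000, "partner_b": 1_000_000}  # hypothetical originations

# Honest use: each partner keeps the credit for the deal she landed.
honest = {"partner_a": deals["partner_a"], "partner_b": deals["partner_b"]}

# The "sleazy deal": each partner claims 100% of her own deal and also
# awards the other partner 100% credit on the deal he didn't open.
collusive = {
    "partner_a": deals["partner_a"] + deals["partner_b"],
    "partner_b": deals["partner_b"] + deals["partner_a"],
}

print(sum(honest.values()))     # 2,000,000 in credit on 2,000,000 of business
print(sum(collusive.values()))  # 4,000,000 in credit on the same business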


Monday, July 22, 2013

Urban Sprawl

Mark Thoma's site has a link to Paul Krugman's discussion of the association between sprawl and low social mobility.  It appears that if you plot urban density against social mobility (the odds of moving from the lowest quintile to the highest), you get a surprisingly linear relation: as density drops, persistent inequality appears to rise.  Paul Krugman is appropriately skeptical that this is the whole story:
Is the relationship causal? You can easily think of reasons for spurious correlation: sprawl is associated with being in the sunbelt, with voting Republican, with having weak social safety net programs, etc.. Still, it’s striking.

Matt Yglesias adds data about what happens with kids who move into high-density urban areas, as well as a few other possible explanations:
So what drives this? The study does not really make a high-powered effort to draw strong causal inferences. But the study does show that kids who moved into a high-mobility area at a young age do about as well as the kids born in high-mobility areas, but kids who move as teenagers don't. So there seems to be a factor that isn't parent-driven. The authors report that tax policy, the existence and affordability of local colleges, and the level of extreme local wealth do not appear to be strong correlates of intergenerational mobility. Metro areas where the poor are geographically isolated from the middle class have less intergenerational mobility, while metro areas with more two-parents households, better elementary and high schools, and more "civic engagement" (measured through membership in religious and community groups) have more.
So clearly it would be a mistake to over-interpret these data. But they do have one major policy implication embedded in them -- it makes absolutely no sense to subsidize sprawl as a positive good.  It may not be worth trying to discourage sprawl, but there are a lot of laws (think zoning laws and car-centered transportation grids) that implicitly subsidize suburban communities.

There are still pieces to be considered -- like whether the poorest quintile does objectively better or worse in low-mobility environments (you can justify low mobility if everyone is better off as a result).  However, the two extremes in Paul Krugman's graph are Atlanta (low density and mobility) and Los Angeles (high density and mobility).  It's not 100% clear that it is better to be poor in California than in Georgia, but so far as I can tell it isn't far worse in California.  Maybe Mark can weigh in here?

But this all points to a bigger picture: urban planning is actually a much bigger deal than I had previously realized.


Not really movie people...

I signed up for Netflix recently, and one of the things I've noticed is that their blurbs often do an extraordinarily bad job of pitching. Compared to the capsules you'll find from Maltin or TV Guide, they have a tendency to leave out the details that would persuade a person who would enjoy a movie to actually go ahead and watch it.

This is a bigger deal than you might think. Consider the Netflix viewing model:

1. I watch a show;

2. Netflix recommends other shows;

3. Based on the information provided and what I know about the show, I decide whether or not to pay the cost in time required to watch it.

It is important to note that this process adds the most value when it gets me to watch and enjoy a show that I was previously unaware of. Here's where a good blurb is vital. Even if the recommendation model works perfectly, the process is a failure when I don't watch the film it recommends.
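
To put that in slightly more formal terms (a back-of-the-envelope model, not anything Netflix actually uses): the expected value of a recommendation is roughly the product of how well the show matches my tastes and the probability that the blurb gets me to hit play, so a weak blurb can zero out even a perfect recommendation engine.

# Back-of-the-envelope model (not Netflix's actual system): the value of
# a recommendation is roughly
#     P(I'd enjoy the show) * P(the blurb persuades me to watch it),
# so a bad blurb wastes the output of even a perfect recommender.

def recommendation_value(p_enjoy: float, p_watch: float) -> float:
    """Expected payoff of recommending one show to one viewer."""
    return p_enjoy * p_watch

print(recommendation_value(p_enjoy=0.95, p_watch=0.05))  # ~0.05: perfect match, lousy blurb
print(recommendation_value(p_enjoy=0.70, p_watch=0.60))  # ~0.42: weaker match, good blurb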

With that in mind, check out this Netflix description:
Royal Deceit (Prince of Jutland)
1994 | R | 1hr 25m
Young Jute Amled avenges his father's murder at the hands of his dissembling uncle, only to be banished to Scotland for the deed. But Amled has other plans, and soon practices a little deception of his own.
Cast: Gabriel Byrne, Helen Mirren, Christian Bale
Genre: Action & Adventure, Dramas, Adventures, Crime Action & Adventure
This movie is: Dark
With the exception of some interesting names in the cast, there's nothing (at least nothing explicit) that would make me want to watch this. That represents a significant failure for Netflix, because the right blurb (from TV Guide) could and did make me watch Royal Deceit on This TV a few months ago.

That summary mentioned that the film was directed by Gabriel Axel (1987's BABETTE'S FEAST).

It included Brian Cox, Kate Beckinsale and Tom Wilkinson in the cast list.

And most importantly, it pointed to the aspect of the film most likely to interest the target audience. You may have guessed this part, but if not, here's a hint: in various retellings of the legendary story of this Danish prince, Amled is often spelled Amleth.

I don't have the TV Guide blurb in front of me but I believe it read a great deal like the first paragraph of their review.
A reworking of Shakespeare's Hamlet, ROYAL DECEIT may lack the Bard's lyrical dialogue but it does boast some sensational action sequences and a truly top-notch cast.
I really do want to see Netflix succeed -- it's a good service for the price and it significantly diversifies the media landscape -- but I can't get past the suspicion that the people behind Netflix are bored with the actual business of running the company. They like the part where people call them visionaries and give them a gazillion dollars for their stock, but the low-glamour stuff makes their eyes glaze over.

This is one of the reasons I keep going back to Weigel as an example. They obviously love the details (Joe Dale has apparently memorized thousands of TV episodes). When Steve Jobs famously got upset because a headphone jack on an iPod didn't have a satisfying click, he was illustrating this same mentality. You saw similar attitudes in Sam Walton and Don Tyson.

I'm not suggesting that we should judge management based on one anecdote, but the more I look at the company, the more it looks like they don't care about details like blurbs and audience demographics. And as someone who wants to see Netflix make it, that worries me.

Sunday, July 21, 2013

Safe failures from "Why Are Movies So Bad? Or, The Numbers"

I'd been meaning to spread out the Pauline Kael posts, but I realized that this 1980 essay about the relationship between corporate culture, misaligned incentives and cinema has an extraordinary number of salient points for some of our ongoing threads.

For example, Kael explains how, under a fairly common set of circumstances, it can be in an executive's best interests to opt for a project with higher costs and lower potential returns.

Why Are Movies So Bad? Or, The Numbers
There is an even grimmer side to all this: because the studios have discovered how to take the risk out of moviemaking, they don’t want to make any movies that they can’t protect themselves on. Production and advertising costs have gone so high that there is genuine nervous panic about risky projects. If an executive finances what looks like a perfectly safe, stale piece of material and packs it with stars, and the production costs skyrocket way beyond the guarantees, and the picture loses many millions, he won’t be blamed for it—he was playing the game by the same rules as everybody else. If, however, he takes a gamble on a small project that can’t be sold in advance—something that a gifted director really wants to do, with a subtle, not easily summarized theme and no big names in the cast—and it loses just a little money, his neck is on the block. So to the executives a good script is a script that attracts a star, and they will make their deals and set the full machinery of a big production in motion and schedule the picture’s release dates, even though the script problems have never been worked out and everyone (even the director) secretly knows that the film will be a confused mess, an embarrassment.
It's worth noting that since Kael wrote this in 1980 (when she was describing a fairly new situation), budgets for major releases have far outpaced inflation. As far as I can tell, none of the top ten films of that year cost more than $90 million in 2013 dollars, and only two broke $50 million (the major spectacle and proven property Empire Strikes Back and the notoriously bloated Blues Brothers). I suspect the dynamic Kael describes has a lot to do with that change.
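
The inflation adjustment is simple enough to sketch. Using a CPI multiplier of roughly 2.8 between 1980 and 2013, and the commonly reported (and admittedly disputed) budget figures:

# Rough CPI check (multiplier of ~2.8 from 1980 to 2013; reported 1980
# budgets vary by source, so treat these as illustrative figures only).
CPI_1980_TO_2013 = 2.8

reported_budgets_1980 = {             # millions of nominal 1980 dollars
    "The Empire Strikes Back": 18,    # low-end published estimate
    "The Blues Brothers": 27,
}

for title, budget in reported_budgets_1980.items():
    print(f"{title}: ~${budget * CPI_1980_TO_2013:.0f} million in 2013 dollars")
# -> roughly $50 million and $76 million, consistent with the claim above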




Saturday, July 20, 2013

Weekend blogging -- Kael on Directors

From Trash, Art, and the Movies by Pauline Kael
The craftsmanship that Hollywood has always used as a selling point not only doesn’t have much to do with art—the expressive use of techniques—it probably doesn’t have very much to do with actual box-office appeal, either. A dull movie like Sidney Furie’s “The Naked Runner” is technically competent. The appalling “Half a Sixpence” is technically astonishing. Though the large popular audience has generally been respectful of expenditure (so much so that a critic who wasn’t impressed by the money and effort that went into a “Dr. Zhivago” might be sharply reprimanded by readers), people who like “The President’s Analyst” or “The Producers” or “The Odd Couple” don’t seem to be bothered by their technical ineptitude and visual ugliness. And on the other hand, the expensive slick techniques of ornately empty movies like “A Dandy in Aspic” can actually work against one’s enjoyment, because such extravagance and waste are morally ugly. If one compares movies one likes to movies one doesn’t like, craftsmanship of the big-studio variety is hardly a decisive factor. And if one compares a movie one likes by a competent director such as John Sturges or Franklin Schaffner or John Frankenheimer to a movie one doesn’t much like by the same director, his technique is probably not the decisive factor. After directing “The Manchurian Candidate” Frankenheimer directed another political thriller, “Seven Days in May,” which, considered just as a piece of direction, was considerably more confident. While seeing it, one could take pleasure in Frankenheimer’s smooth showmanship. But the material (Rod Serling out of Fletcher Knebel and Charles W. Bailey II) was like a straight (i.e., square) version of “The Manchurian Candidate.” I have to chase around the corridors of memory to summon up images from “Seven Days in May”; despite the brilliant technique, all that is clear to mind is the touchingly, desperately anxious face of Ava Gardner—how when she smiled you couldn’t be sure if you were seeing dimples or tics. But “The Manchurian Candidate,” despite Frankenheimer’s uneven, often barely adequate, staging, is still vivid because of the script. It took off from a political double entendre that everybody had been thinking of (“Why, if Joe McCarthy were working for the Communists, he couldn’t be doing them more good!”) and carried it to startling absurdity, and the extravagances and conceits and conversational non sequiturs (by George Axelrod out of Richard Condon) were ambivalent and funny in a way that was trashy yet liberating.
On a related note, I read The Manchurian Candidate not that long ago and was struck by how faithful the movie was (the original, not the incredibly pointless remake), but also by how much more restrained it was. The book was a full-bore, pitch-black satiric farce. There is simply no way you could have gotten the sexual content (including an explicit incest subplot and a wartime incident that plays like something conceived by the Farrelly brothers) past even a fading Hays Code. More importantly, in 1962 the red scare was still fresh enough that no major studio film would have had the nerve to take the central joke as far as the book did and leave no doubt about who these murderous, sexually deviant communist agents were supposed to be.




Friday, July 19, 2013

Wages

One issue that is raised by Mark's recent post but not explicitly discussed is the issue of a "living wage".  It is popular to argue that wages are set by fundamental market forces and thus are deserved.  But that view isn't universal, and Justin Fox claims it may be seeing some pushback:
That's because it's becoming clear that pay levels aren't entirely set by the market. They are also affected by custom, by the balance of power between workers and employers, and by government regulation. Early economists understood that wage setting was "fundamentally a social decision," Jonathan Schlefer wrote on HBR.org last year, but their 20th century successors became fixated on the idea of a "natural law" that kept pay in line with productivity. And this idea that wages are set by inexorable economic forces came to dominate popular discourse as well.
One of the better pieces of evidence he brings up is the difference between the experiences of the American and German auto-workers:
In 2010, Germany produced more than 5.5 million automobiles; the U.S. produced 2.7 million. At the same time, the average auto worker in Germany made $67.14 per hour in salary and benefits; the average one in the U.S. made $33.77 per hour. Yet Germany’s big three car companies—BMW, Daimler (Mercedes-Benz), and Volkswagen—are very profitable.

Now, it is hard to build a theory around an anecdote.  But science demands that we look for places where the theory does not fit the facts, and facts like these are inconvenient -- not because confounding factors can't be proposed, but because maintaining high wages alongside high market share seems paradoxical in the current world.

But really, it should make us question whether the market is a law of nature or a social construct.  Because once you accept that wage decisions are socially negotiated, issues like wage inequality become much more salient.


First assume a fairy godmother...

This is one of those stories that illustrate just how bad journalists have gotten at covering life in the bottom quartile. Here, from Marketplace, is the set-up:
The fast food chain teamed up with Visa to create an online budget guide for its employees. And most of the criticism is directed at the fact that the company's budget doesn't list 'food' or 'heat' as monthly budget items. 
...
"Helping you succeed financially is one of the many ways McDonald's is creating a satisfying and rewarding work environment," the McDonald's site's about page states. "So you can take the next step towards financial freedom." 
To do that, the guide suggests journaling daily expenses, setting up a budget and outlining a savings goal. Sound reasonable?
One problem: the sample budget offered by McDonald's (below) doesn't mention money for basic necessities like food, heat, gas and clothing. 
The budget also assumes a worker will need to maintain two jobs in order to make roughly $24,500 a year.

Here's the actual document:



A heated debate has broken out over whether it's possible to live on $24,500 a year. This is not a question that would perplex a group pulled at random from the general populace. People do it all the time. I've done it myself (and yes, I'm adjusting for inflation). I even have a musician friend in New York City who's doing it now.

You eat lots of beans and potatoes. You get a prepaid phone. You buy a set of rabbit ears (which, as mentioned before, would actually give you more channels and a better picture than the basic cable the WP article suggests). You live day-to-day. You constantly worry about money. You're one bad break away from disaster, but with the exception of the health insurance and heating items, nothing in the expenses, including rent, is that unreasonable.

There is, in fact, only one completely unrealistic item here:

Second job: $955

Angry Bear, which does get it, explains just how much work we're talking about.
Besides skipping certain expenses and skimping on others; to meet the income levels portrayed in the budget, McDonalds suggests associates to work not one but two jobs. A full time job at McDonalds and a part time job elsewhere totaling 62 hours per week (if the worker resides in Illinois where the minimum wage is $8.25/hour). If perchance, the worker resides in one of the other 48 states; the total hours needed to hit the suggested income level jumps to 74 hours/week due to a lower minimum wage (the equivalent of a second full time job).
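
The hours are easy to sanity-check. Ignoring Angry Bear's tax assumptions (withholding is what pushes their totals up to 62 and 74), the pre-tax arithmetic looks like this:

# Pre-tax sanity check on the hours (tax withholding pushes the totals
# up toward Angry Bear's 62- and 74-hour figures).
TARGET_ANNUAL = 24_500   # income level implied by the sample budget
WEEKS_PER_YEAR = 52

for label, wage in [("Illinois minimum ($8.25)", 8.25),
                    ("federal minimum ($7.25)", 7.25)]:
    hours_per_week = TARGET_ANNUAL / WEEKS_PER_YEAR / wage
    print(f"{label}: {hours_per_week:.0f} hours/week before taxes")

# Illinois minimum ($8.25): 57 hours/week before taxes
# federal minimum ($7.25): 65 hours/week before taxes
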
And Marketplace explains how unlikely that 74 is:
At the same time, there’s been a sharp drop in the number of people who are holding down multiple jobs, and most of those are likely to be part-time, since there are only so many hours in a day. The number of multiple job-holders is down by more than 500,000 since 2007.  So, there are more people in part-time jobs, but fewer people able to cobble together two or more of those jobs to make ends meet.
...
This trend to more part-time work could be permanent. Employers like the flexibility, and the low cost. Benefits in many part-time jobs -- health care, retirement -- are slim to none.

But there’s a complication. For job-seekers, it’s now harder to find and keep multiple part-time jobs. “Among low-wage employers -- retail, hospitality, food service -- employers are requiring their employees to say they’re available for a full-time schedule, even when they know they’re never going to schedule them for full-time,” says Stephanie Luce at the City University of New York’s Murphy Institute.

Luce is a labor sociologist who studies union movements around the world. She co-authored, with the Retail Action Project, a study based on surveys of retail workers in New York, Discounted Jobs: How Retailers Sell Workers Short. “Managers are asked to schedule based on customer-flow, on weather, on trends in the economy, and to change the schedule day-to-day,” says Luce. “They don’t want employees that are going to say ‘I can’t come in, I have another job.’ They want employees that’ll say, ‘OK, I’ll come in if you need me. I won’t come in if you don’t need me.’”


Thursday, July 18, 2013

If we just look at climate change, we should do these things. If we take climate change out of the picture, we should still do these things

Massoud Amin, chair of the Institute of Electrical and Electronics Engineers Control Systems Society’s Technical Committee on Smart Grids, has a must-read opinion piece up at Nature (unfortunately behind a paywall, though you can get most of the same information from this interview). He lays out a highly persuasive case based on both economic benefits, like greatly reducing the number and duration of power outages (currently estimated to cost the US economy between US$80 billion and $188 billion each year), and environmental benefits, like reducing carbon dioxide emissions by 12–18% by 2030.

Two things in particular struck me as I read this. The first was that Amin could make his argument either strictly on economic terms or strictly on environmental ones. There's an obvious parallel here with fixing choke points in our freight rail system, an infrastructure improvement that would pay for itself two or three times over in areas like transportation costs, highway congestion and wear-and-tear on roads and bridges. Our inability to take action is so entrenched that we can't take significant steps to address climate change even when there's an overwhelming non-environmental case for moving forward.

The second thing that hit me was the cost Amin gives: around $400 billion, or $21 billion to $24 billion a year for 20 years. That is a great deal of money in absolute terms, but when you start looking at the various costs associated with climate change -- sea level rise, ocean acidification, droughts, extreme weather -- you get into the trillions fairly quickly (you might well hit a trillion in Florida alone).

What follows is very back-of-the-envelope, but based on the US share of carbon emissions, and making all sorts of simplifying assumptions, it looks like the smart grid impact would be around 2.5% of global totals. In gross terms, ignoring all other economic benefits, smart grids are not a particularly cheap way of reducing carbon emissions (in net terms they actually pay for themselves, but put that aside), but $400 billion for a two-and-a-half percent reduction doesn't seem that high.
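
Spelling the envelope out (round numbers: I'm assuming a US share of global CO2 emissions of about 17 percent, which was roughly right at the time):

# The back-of-the-envelope, spelled out (assumes a US share of global
# CO2 emissions of ~17%, roughly right for the early 2010s).
US_SHARE_OF_GLOBAL_CO2 = 0.17
AMIN_US_REDUCTION = (0.12, 0.18)   # Amin's smart-grid cut to US emissions by 2030
COST_BILLIONS = 400                # Amin's ~20-year price tag

low = US_SHARE_OF_GLOBAL_CO2 * AMIN_US_REDUCTION[0]
high = US_SHARE_OF_GLOBAL_CO2 * AMIN_US_REDUCTION[1]
print(f"global impact: {low:.1%} to {high:.1%}")   # ~2.0% to ~3.1%, midpoint ~2.5%

midpoint = (low + high) / 2
print(f"gross cost: ~${COST_BILLIONS / (midpoint * 100):.0f} billion "
      f"per percentage point of global emissions")  # ~$157 billion per point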

The climate change debate is often framed as a choice between saving the planet and saving the economy, but when you look at the proposed solutions, they either don't seem to put that much of a drag on the economy (carbon taxes, moving away from coal) or they actually pay for themselves (infrastructure improvements) while the potential economic damage of climate change dwarfs the combined costs of all of the proposed solutions.


Wednesday, July 17, 2013

Intellectual property and Marvel

(I told you I'd connect Stan Lee to this)

For about the past fifty years, the company which is now Marvel Entertainment has made a spectacular amount of money, and it has done so in virtually every medium, from comics to television to film to video games to novels to, Heaven help us, even the stage.

This is all the more remarkable when you consider that, shortly before its period of dominance, the company was a third-rate imprint that was, by some accounts, on its last legs.

The rise was a remarkable achievement both in American publishing and pop culture. In economic terms alone, it shows how a small company with almost no resources or structural advantages can come to dominate an industry and generate billions of dollars.

One aspect of that story which is particularly relevant given our recent posts on the subject is the way Stan Lee used public domain (and in some cases, not-quite-public domain) intellectual properties as an important part of his business model.

First a quick and hopefully painless bit of comic book history.

Superheroes were the first big original content hit of the medium. Starting with Superman in 1938, they dominated that side of the industry for almost a decade. Licensed titles (like Dell's Disney line) were, by some sources, bigger sellers, but if you were creating characters specifically for comics in the early Forties, superheroes were where the money was. By the end of the decade, though, the boom was largely over, and other fads such as crime, horror, Westerns, funny animals, funny teenagers and (with a very unlikely origin) romance took turns as the next big thing.

Of course, comic book publishers kept trying to bring back the genre that had started it all. Lots of companies tried to introduce new superheroes or dust off old ones, but without real success (among others, the company that would become Marvel was particularly badly burned by a superhero push in the mid-Fifties). The big exception here is Magazine Enterprises' Ghost Rider in 1949, but as Don Markstein pointed out, that character blended the faded superhero genre with the up-and-coming Western and horror genres.

It was not until 1956 that a team working under DC editor Julius Schwartz came up with a workable formula: take a dormant mid-tier character from the Forties and completely rework the character (sometimes keeping only the name) with a science fiction origin, streamlined jumpsuit-inspired costumes and a heavy emphasis on space-age themes.

In rapid succession, and generally with great success, Schwartz applied this rebooting approach to a number of properties, and soon other companies were trying their hand. As early as 1959, Archie Comics (which had been known for a relatively successful and very violent collection of superhero titles in the Forties) had hired Joe Simon and Jack Kirby to rework their character the Shield. As the Sixties got going, almost everyone was in on the act.

In 1961, Marvel Comics joined in. Marvel was a small part of Martin Goodman's middling publishing company but it did have a couple of significant assets: a few well-remembered Golden Age characters (the Human Torch, Namor and Captain America) and comics auteur Jack Kirby. Given the market conditions of the time, Kirby's brand was extremely valuable. There was a tremendous demand for all things associated with the previous era of superheroes and Kirby had been a major player with an exceptional level of prominence. In an era when most stories went unsigned, his name was a big enough selling point to be featured prominently on the covers.

Much myth has accumulated around the creation of the Fantastic Four, partially because of the impact the title would go on to have and partially because none of the people involved (Goodman, Lee, Kirby) can be considered reliable narrators. But if you simply look at the book itself and what had been going on in the industry at the time, FF #1 is about what you would expect, combining Schwartz's formula with the group dynamics of Kirby's team comics and elements of the monster comics Marvel had been producing (the two most unusual aspects, the lack of costumes and secret identities, were completely dropped when Marvel introduced Spider-Man less than a year later).

I don't mean any disrespect for Marvel here. This is usually how companies really work. You find an existing business model, modify it slightly, then use it to establish a revenue stream and a loyal customer base. That's what Lee did. That's what Sam Walton did with Ben Franklin stores. The big ideas and innovation tend to come only after you have a successful stable operation (which was certainly the case with Marvel). That leads to a much bigger point.

We have seen over the past few years a tendency to grant intellectual property protection to ideas that would previously have been considered general parts of a business plan (for example, offering free wi-fi to customers). What if the ability to borrow and recombine elements of business plans, in a kind of de facto genetic algorithm, is an important part of a creative economy? What if being derivative is the first step toward coming up with something original?

There are also some interesting IP questions involving the creation of Spider-Man (Wikipedia has a good summary), but that's a discussion for another time. The part of the story that's most relevant comes a couple of years later.

As mentioned previously, from approximately 1956 to 1966, the big thing in comics was to modernize and reboot Golden age characters. This left Marvel with a problem: With the exception of Captain America, the Human Torch and Namor, the company had a very thin bench. You very soon got down to really obscure characters. The whole purpose of the reboot model is to cash in on name recognition so rebooting the virtually forgotten is of limited value. (You have to wonder how many readers in the Sixties had ever heard of the Thirties Tarzan knock-off Ka-Zar.)

Lee's solution was to launch characters using at least the names of three of the biggest sellers of the Golden Age: Daredevil, Ghost Rider and Captain Marvel. None of these actually belonged to Marvel; all were arguably in the public domain. It is the third one that required considerable nerve.

Captain Marvel had been, by some standards, the most successful character to come out of the Golden Age, outselling even Superman (nuisance suits from DC were a big factor in the decision to eventually cancel the series in 1953). What's more, the publisher, Fawcett, was big and, though out of the comics business, still very active, publishing titles including Family Circle, Woman's Day, Mechanix Illustrated and Gold Medal paperbacks.

Lee was betting (correctly, as it turns out) that Fawcett either wouldn't notice or wouldn't bother to sue over an obvious copyright infringement. It was a bold but not a reckless move. Attitudes toward copyrights have changed greatly since then, and many of those changes involve moving away from the earlier emphasis on active properties and going concerns. Up until recently, the primary reason you acquired and held copyrights was that you wanted to do something with those properties. As a result, if someone went out of the comics business and no one had an immediate interest in their properties, the copyrights were often allowed to lapse or (in the case of Fawcett) go unenforced.

There's a lesson here about creative destruction. Companies, particularly those in the creation business, often start out by borrowing business plans and skirting copyright and patent laws. You can certainly argue that this lowers the value of the intellectual property they are making use of, but I think you can also argue, as or more persuasively, that the returns on tolerating this behavior from small, young companies far outweigh the cost.

For more on the IP beat, click here, here, and here.

Tuesday, July 16, 2013

Krugman is on a roll

Krugman begins to build the case against a direct application of the theories proposed in Ayn Rand's magnum opus Atlas Shrugged here:
Of course, that’s not how we do things. We may live in a market sea, but that sea is dotted with many islands that we call firms, some of them quite large, within which decisions are made not via markets but via hierarchy — even, you might say, via central planning. Clearly, there are some things you don’t want to leave up to the market — the market itself is telling us that, by creating those islands of planning and hierarchy.


and here:
The thing is, however, that for a free-market true believer the recognition that some things are best not left up to markets should be a disturbing notion. If the limitations of markets in providing certain kinds of shared services are important enough to justify the creation of command-and-control entities with hundreds of thousands or even millions of workers, might there not even be some goods and services (*cough* health care *cough*) best provided by non-market means even at the level of the economy as a whole?
In a lot of ways, the corporation is a huge challenge for theories of free markets.  Corporations are large organizations that have a great deal of political power and are famous for being able to survive sub-optimal decisions (see Dilbert, especially the Wally character).  This use of political power can result in firms being successful due to favorable regulation (consider agricultural subsidies).

This is quite different from the nation of shopkeepers envisioned by Adam Smith.  When business is not organized in small units, new entrants to the market can be strangled by existing firms (if nothing else, it is expensive to fend off spurious lawsuits).  The Sears experiment that Krugman references is another good piece of evidence that there can be returns to cooperation as well as competition.  Heck, a student of history who has read Caesar's account of his war against the Gauls should already suspect that cooperation can be extremely important to the success or failure of groups.

Relative versus absolute changes

Paul Krugman:
One implication of all this is what Gauti Eggertsson and I (pdf) call the paradox of flexibility: making it easier for wages to fall, as Hazlitt demanded then and his modern acolytes demand now, doesn’t just redistribute income away from workers to the wealthy (funny how that happens); it actually worsens the economy as a whole.
I think this is a very good example of a case where relative advantage and absolute advantage are confused.  Any one firm can become more competitive by reducing costs.  But if everyone reduces costs, then there is no advantage.  Things only get interesting if, say, railroad workers and teachers are out of alignment on wages and you can change the wages of one group relative to the other.

But there isn't a clear reason that this adjustment has to come from dropping wages.  Differences in wage increases over time can address the misalignment without causing massive hardship in a world where obligations are in nominal dollars.
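
A toy illustration of the nominal-obligations point (all numbers invented): cut everyone's wages by the same fraction and relative positions don't move at all, but debt payments fixed in nominal dollars eat a bigger share of every paycheck.

# Toy illustration (invented numbers): an across-the-board wage cut leaves
# relative wages untouched but makes nominal debts harder to carry.
wages = {"railroad worker": 50_000, "teacher": 45_000}
DEBT_PAYMENT = 12_000    # fixed nominal obligation per worker, per year
CUT = 0.10               # everyone's wages fall 10%

for job, wage in wages.items():
    before = DEBT_PAYMENT / wage
    after = DEBT_PAYMENT / (wage * (1 - CUT))
    print(f"{job}: debt takes {before:.1%} of income before, {after:.1%} after")

# The wage ratio (50,000 / 45,000) is unchanged -- no one gains a relative
# advantage -- but every debtor's real burden has gone up.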

So I think Paul Krugman is right to be skeptical about deflation as a solution.  


Monday, July 15, 2013

Believe it or not, I'm going to tie this in with the Fantastic Four





Joseph Stiglitz has a NYT opinion piece entitled "How Intellectual Property Reinforces Inequality" which lays out, in one of the most serious areas imaginable, just how costly out-of-control IP laws can be (a point we've been making in much more frivolous terms).
The drug industry, as always, claimed that without patent protection, there would be no incentives for research and all would suffer. I filed an amicus brief with the court, explaining why the industry’s arguments were wrong, and why this and similar patents actually impeded, rather than fostered, innovation. Through my participation in the case, I heard heart-rending stories of women who didn’t take the actions they should have, because they believed the false-negative results of Myriad’s inferior tests. The better tests would have told them they did in fact have a gene associated with cancer.

The good news coming from the Supreme Court was that in the United States, genes could not be patented. In a sense, the court gave back to women something they thought they already owned. This had two enormous practical implications: one is it meant that there could now be competition to develop better, more accurate, less expensive tests for the gene. We could once again have competitive markets driving innovation. And the second is that poor women would have a more equal chance to live — in this case, to conquer breast cancer.

But as important a victory as this is, it is ultimately only one corner of a global intellectual property landscape that is heavily shaped by corporate interests — usually American. And America has attempted to foist its intellectual property regime on others, through the World Trade Organization and bilateral and other multilateral trade regimes. It is doing so now in negotiations as part of the so-called trans-Pacific Partnership. Trade agreements are supposed to be an important instrument of diplomacy: closer trade integration brings closer ties in other dimensions. But attempts by the office of the United States Trade Representative to persuade others that, in effect, corporate profits are more important than human lives undermines America’s international standing: if anything, it reinforces the stereotype of the crass American.

Economic power often speaks louder, though, than moral values; and in the many instances in which American corporate interests prevail in intellectual property rights, our policies help increase inequality abroad. In most countries, it’s much the same as in the United States: the lives of the poor are sacrificed at the altar of corporate profits. But even in those where, say, the government would provide a test like Myriad’s at affordable prices for all, there is a cost: when a government pays monopoly prices for a medical test, it takes money away that could be spent for other lifesaving health expenditures.

Call me suspicious...

But I've gotten to the point where I look for signs of manipulation in all business news and brokers' recommendations. Case in point, Disney had a bad week recently. As you've probably heard, the Lone Ranger reboot is on track to lose a lot of money (the figure $100 million keeps being tossed around), but that doesn't cover the full drop in expected value. Disney was shooting for another Pirates franchise (complete with the same writers, director, producer and star). The first four installments of that series have done almost four billion in box office and the fifth and sixth chapters are in the works. And that box office total doesn't include toys and tee shirts and all of the other ways Disney could make money off something like this. Investors who had priced in the possibility of this being another Pirates will need to recalibrate.

Disney is a huge company, but even there the old saying applies -- "a billion here... a billion there... pretty soon you're talking about real money." Not enough to threaten the company but worth taking into account when thinking about stock price. Fortunately for Disney, this terrible news was balanced out by quite a bit of good (enough to bump the price up a bit). Credit Suisse analyst Michael Senno estimated a global take of $1.2 billion for Star Wars Episode VII and Motley Fool* ran a string of positive stories arguing that Disney was adding value to Marvel and that "Buena Vista Pictures is earning as much as ever." That second claim was supported with a year-over-year comparison:
Disney won't be as fortunate with The Lone Ranger, which is why so many are comparing this flop-in-the-making to John Carter, last year's $250 million box office bomb that effectively ended the Disney career of former studio chief Rich Ross. 
If only he knew then what we know now. John Carter, for as big a disaster as it was, did nothing to diminish the House of Mouse's theatrical prowess. Here's a closer look at the year-over-year numbers from Jan. 1 through June 30: 
Buena Vista Year-Over-Year Comparison

                         YTD 2013          YTD 2012          Change
Number of films          10                12**              (2)
Total U.S. box office    $886.8 million    $949.8 million    (6.6%)
Per-film average         $88.7 million     $79.2 million     11.9%

Source: Box Office Mojo.
** Includes a 3D rerelease of The Lion King.
After achieving $800 million in domestic box office receipts only once since 2000 (in 2010), Disney has done at least that in both 2012 and this year. Impressive may be too timid a word for how well Buena Vista is doing right now.
Notice anything missing? How about budgets, marketing costs, performance of comparable films from other studios? Keep in mind that in absolute numbers, John Carter did pretty well:
John Carter earned $73,078,100 in North America and $209,700,000 in other countries, for a worldwide total as of June 28, 2012 of $282,778,100.
In relative terms, not so much:
Paul Dergarabedian, president of Hollywood.com noted, "John Carter’s bloated budget would have required it to generate worldwide tickets sales of more than $600 million to break even...a height reached by only 63 films in the history of moviemaking"
(According to the New York Times, the Lone Ranger would have to hit $800 million to break even.)
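
For what it's worth, those break-even figures follow from the usual (rough, unofficial) rules of thumb: studios keep only around half of the worldwide gross, and prints-and-advertising can add something like half the production budget again. A sketch, with those assumptions made explicit:

# Rule-of-thumb break-even arithmetic (rough industry conventions, not
# studio figures): theaters keep about half the gross, and marketing
# ("P&A") adds roughly 50% on top of the production budget.
def breakeven_gross(production_budget, pa_ratio=0.5, studio_share=0.5):
    """Worldwide gross needed to cover production plus marketing."""
    return production_budget * (1 + pa_ratio) / studio_share

print(breakeven_gross(250))   # John Carter, ~$250M budget -> ~$750M gross
                              # (same ballpark as Dergarabedian's $600M; the
                              # answer is very sensitive to the assumed P&A
                              # spend and studio share)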

How about Disney adding value to Marvel? The only real example I saw in the article was the willingness to cough up extra money to hold on to talent like Whedon and Downey. Probably a good investment but old news and a case of maintaining, not adding, value.

And that incredible prediction for the Star Wars reboot? "Not credible" about covers it.

Perhaps I'm too cynical, but given the low quality but excellent timing of these analyses, I have to believe that some folks at Disney have really been working the phones.

* I'm going by the Motley Fool posts and not the videos that accompany them. If anyone out there wants to take one for the team and watch them, let me know if anything of value is said.

Thanks