Comments, observations and thoughts from two bloggers on applied statistics, higher education and epidemiology. Joseph is an associate professor. Mark is a professional statistician and former math teacher.
Tuesday, December 27, 2011
You have to wonder what they're thinking
I think we also need to page Mark Palko
In other words, anything you’d recognize as a smartphone seems to be covered.
Matthew Yglesias asks the smart question:
The issue is that there's just no sound public interest case for granting monopolies over certain features to the first-to-market firms in this industry. Apple has already gained a very large competitive advantage from the fact that they were the first people to deploy a working touchscreen smartphone and even without patents clearly has a strong financial need to continue investing in improving its product lest lower-margin Android-powered phones eat away at its profits.
But the general trend seems worrisome. Not only does it vastly increase business complexity (searching the patent office for thousands of potentially applicable patents), but it stifles innovation by making new entry into the smartphone field more difficult.
Mark?
Monday, December 26, 2011
Does strong government help?
Why did primitive cultures lose to organized cultures? It is pretty clear, for example, that the Gauls had a much less intrusive government than the Romans. They had a lot of brave fighters and a fairly free society (as ancient world societies went). While the Gauls are certainly a bit romanticized in literature, it is clear from reading Julius Caesar's accounts of his wars that strong central government was notably absent among them.
So why were they overrun by a high-tax society with a strong central government?
It is equally interesting to ask questions like why the residents of the American West sought statehood. After all, did they not have a much freer society outside of the United States?
Or why do people seem to suffer so badly in failed states, which also have a lack of strong central government?
I think that these practical concerns need to be addressed. A lot of what the state does is protect individual citizens (banning force and fraud), provide a venue for settling disputes (the courts), act as an insurance company for risks that are hard to handle through markets, provide public goods, and organize mutual defense. Perhaps the goal of government should be effective government and not minimalist government.
I wonder if libertarianism thrives because most of us have not seen a failed state in the first world. People have a lot of mobility (if they are rich), so there is less of a sense of how government helps make a society function.
Introducing the Ddulites
Ddulite (from Luddite):
A preference for higher tech solutions even in cases where lower tech alternatives have greater and more appropriate functionality; a person of ddulite tendencies.
Though Ddulites are the opposite of Luddites with respect to attitudes toward technology, they occupy more or less the same point with respect to functionality.
Sunday, December 25, 2011
Is there an economist in the house? Casey Mulligan edition
Two posts back, I explained why the "Great Vacation" idea doesn't pass the smell test. If U.S. unemployment had been caused by a negative shock to labor supply, we should have expected to see an increase in real wages.

Casey Mulligan, one of the leading proponents of the Great Vacation story, responded on his blog:

A number of bloggers have recently discovered real wages as a labor market indicator. They are at least 3 years late to the party.
Three years ago I blogged about the effect of labor supply on real wages;
I noted how real wages had risen since 2007, and predicted that they would begin to decline in 2010.
I have continued to update this work, eg here, and here. The fact is that the real wage time series fits my recession narrative very well.*
The second type of explanation is reduced labor supply.

This is way outside my field, so I could easily be missing the obvious here, but this post doesn't seem to support the claim that Mulligan was blogging on this question three years ago. That's not to say Mulligan's argument isn't valid or that it doesn't somehow imply his point about wages, but if you're going to say "Three years ago I blogged about the effect of labor supply on real wages," you should probably mention wages in more than a passing way. From the old post:
Suppose, just for the moment, that people were less willing to work, with no change in the demand for their services. This means that employees would have to be more productive because they have to get by with fewer workers.
Of course, people have not suddenly become lazy, but the experiment gives similar results to the actual situation in which some employees face financial incentives that encourage them not to work and some employers face financial incentives not to create jobs.
Professor Douglas gave us a formula for determining how much output per work hour would increase as a result of a reduction in the aggregate supply of hours: For every percentage point that the labor supply declines, productivity would rise by 0.3 percentage points.
As mentioned earlier, in late 2008, labor hours were 4.7 percent below where trends from previous years would predict the number to be. According to Professor Douglas’s theory, this means productivity should rise 1.4 percent above its previous trend by the fourth quarter.
So let’s take a look at the numbers. Unlike in the severe recessions of the 1930s and early 1980s, productivity has been rising. Through the third quarter of 2008, productivity had risen six consecutive quarters, with an increase of 1.9 percent over the past three, or 0.7 percent above the trend for the previous 12 quarters.
Because productivity has been rising — almost as much as the Douglas formula predicts — the decreased employment is explained more by reductions in the supply of labor (the willingness of people to work) and less by the demand for labor (the number of workers that employers need to hire).
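Just to make the quoted arithmetic explicit, here is a minimal sketch using only the figures in the excerpt above (the 0.3 coefficient, the 4.7 percent shortfall in hours, and the 0.7 percent productivity gain); nothing here is independently sourced.

```python
# Back-of-the-envelope check of the Douglas-formula arithmetic quoted above.
douglas_coefficient = 0.3   # productivity gain (pct points) per 1% drop in hours
hours_shortfall = 4.7       # percent below trend, late 2008 (from the quote)

predicted_gain = douglas_coefficient * hours_shortfall
observed_gain = 0.7         # percent above the previous 12-quarter trend (from the quote)

print(f"Predicted productivity rise above trend: {predicted_gain:.1f}%")  # ~1.4%
print(f"Observed productivity rise above trend:  {observed_gain:.1f}%")
```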
I can be sympathetic. Since I started blogging I've often recalled some prophetic observation I made in the past and started typing up a boastful post only to discover on review that I actually hadn't been that prophetic after all.
That's why I always reread old posts before I link to them; if I don't, someone else will.
* I had to remove a couple of reams of html formatting here. If I inadvertently removed something else, let me know.
Saturday, December 24, 2011
Layoffs
And part of it is that the corporation demands continued growth and profit. It can defer some of its growth for long-term development, or keep on an unsuccessful project that someone really likes, but really it boils down to guaranteed growth. And if you attain that growth, then they need to increase that rate of growth. And lord help you if you have a very good year - that very good year becomes the baseline for further calculations. In short, it is a vicious cycle.

So they pass out the budgets for next year and now the departments have to plan. Yeah, some of that planning involves going back and telling the guys with the budgets that this makes no sense, and sometimes that works. More often it involves figuring out what goes overboard in order to jack up profitability.

Sometimes it is a new process that saves time or lowers the cost of materials. Sometimes it is a new market that has been opened. Sometimes it is that "big hit" that suddenly arrives and surprises everyone (businesses actually don't like the "big hit" - it really screws up their planning. If they say you are going to lose 3 million this year and you instead MAKE 3 million, you make them look like idiots, and you will be punished accordingly).

There are two chilling comments here. One is the idea that a good year can become the baseline for future expectations, so it makes sense to ensure that you don't have any unexpected surprises. Two, which is the icing on the cake, is the idea that an unexpected burst of profitability will actually be punished. That this can pass for an efficient system is astonishing.
The standard explanation of "creative destruction" (badly run firms fail and are replaced by better run firms) presumes that we actually let large corporations fail. But there has been a general reluctance to do this in fields ranging from automobile manufacture to banking. Without that safety valve, this approach is going to be very dangerous to efficiency. In a later post, Jeff Grubb notes that the CEO compensation for Hasbro (the subject of the first post) is pretty decent:
For those not linking, it is an announcement that the CEO of Hasbro is getting paid $23 Million this year. And yeah, it is like pouring oil on troubled water, then tossing in a match.

Now, doing the digging in the article, the CEO gets a raise in salary from $1 Mill to $1.2 Mill (hardly chump change), with the rest being common stock. And to the best of my knowledge (the Internet will correct, of course), this means that it comes out of the company till - they are reassigning stock held by the company to the individual. And this assignment may have other strings attached - the stock cannot be sold except back to the company, it may only be sold at a particular price, it must be sold on leaving the company. So it is a fuzzy number, but a very large fuzzy number.

The article also makes clear that this is a retention payment, negotiated last year, to keep the CEO around. It also notes that Hasbro had a weak 2010 in sales (stock prices went up, though). 2011 is nothing to write home about (stock prices have since deflated) and 2012 is not shaping up to be any better (upcoming big movie: Battleship). So this is not about performance, but rather about stability. This is payment for showing up.

What is fascinating about part two is that this is the same company that just laid off two popular and productive long-term employees in the Wizards of the Coast division. So I googled game developer salaries and found this:
Game designers who work for a big company such as Hasbro or White Wolf Publishing can expect a more reliable salary, usually averaging between $30,000 and $50,000 per year.

Now let us presume fringe and overhead double the salary, and that both of the senior developers were at the very top of this range. The retention bonus portion of the CEO's payment was enough to pay 400 developer-years of salary (both sides have fringe and overhead; it's unclear how this would work out in the details, but this is a good starting estimate). Seriously, keeping the CEO around for another year was worth hundreds of experienced employees. What is ironic is that the base salary of the CEO alone is that of twenty senior developers (raised to twenty-four this year). That is actually a credible ratio of the benefit of a good CEO to a company (they have about as much influence as a couple of seasoned design teams). The additional $22 million is hard to understand. No wonder companies don't like comparisons between executive compensation and line worker compensation.
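For what it's worth, here is a rough sketch of the back-of-the-envelope comparison above. The only inputs are figures already quoted (a top-of-range $50,000 developer salary, the $1 million and $1.2 million base salaries, and roughly $22 million in stock), and the fringe-and-overhead adjustment is shown both ways since, as noted, the details are fuzzy.

```python
# Rough, fuzzy-number comparison of Hasbro CEO pay to senior developer pay,
# using only the figures quoted above.
developer_salary = 50_000          # top of the quoted $30k-$50k range
ceo_base_old, ceo_base_new = 1_000_000, 1_200_000
ceo_stock_award = 22_000_000       # approximate stock portion of the ~$23M package

print(ceo_base_old / developer_salary)    # ~20 senior developer salaries
print(ceo_base_new / developer_salary)    # ~24 after the raise

# Developer-years covered by the stock portion, with and without assuming
# fringe and overhead roughly double the cost of a developer.
print(ceo_stock_award / developer_salary)        # ~440 developer-years
print(ceo_stock_award / (2 * developer_salary))  # ~220 -- either way, hundreds
```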
I do not really have a good idea about how to handle this issue in a more global sense, but I am deeply worried that this pattern could be playing out in corporations that we simply are unwilling to let go under.
That is a scary thought.
Do copyright extensions drive innovation? -- Hollywood blockbuster edition
After all this time on patents, I thought we'd give copyrights a turn.
One of the standard arguments for stronger intellectual property laws is that they encourage innovation. Now let's think about how this is supposed to work. Stronger protection for intellectual property makes those properties more valuable. Greater value causes the market to generate more and better properties, particularly those specific properties that best capitalize on the new profit potential.
In the case of copyright protection, the properties that make the best use of these extensions are franchisable stories and characters. I'm specifically using franchise in the sense of selling the right to use a business model. Just as McDonald's can sell one person the right to run a restaurant in one neighborhood and then sell a different person the right to run one in a different neighborhood, the company that owns the rights to, say, Batman can allow one creative team to produce a series of properties based on the character, then turn around a few years later and allow another team a shot.
This interchangeability of talent is essential given the lengths of time we're talking about here. For most of the Twentieth Century, copyright protection was effectively capped at fifty-six years, but major extensions were passed in 1976 and 1998 which extended protection of corporate works up to ninety-five years and left the possibility open of essentially unlimited future extensions.
In order to reach their full potential, franchises have to repeatedly replace all of their creative personnel. Bond and Batman are arguably the best examples, both having gone through numerous incarnations with completely different creative teams, but there's an important difference in the business models. Bond was an ongoing series with considerable continuity both in front of and behind the camera; the Batman pattern since the Sixties has been a successful run, a fallow period, then a relaunch with a new team. The second model, with its long cycles, takes better advantage of the long copyrights.
We would expect 1976 and 1998 to produce major upticks in the creation of properties that could support Batman style franchises because at those points the profit potential of that type of property greatly increased. We would also expect newer properties generally to be more valuable than older properties both because of freshness and because of changing tastes.
That means that, by now, if we look at films that are either part of a franchise or an attempt to launch or relaunch one, we should expect to see a very large share from the past decade (both because of the 1998 Act and because of recency), then a decent showing from the Eighties and Nineties, and little if anything from before the mid-Seventies. With that in mind, let's look at the medium to large budget franchisable movies from 2011 and their creation decade:
The Adventures of Tintin -- Twenties
Alvin and the Chipmunks: Chipwrecked -- Fifties
The Twilight Saga: Breaking Dawn - Part 1 -- 00s
Captain America: The First Avenger -- Forties
Conan the Barbarian -- Thirties
Cowboys & Aliens -- 00s
Diary of a Wimpy Kid: Rodrick Rules -- 00s
The Green Hornet -- Thirties
Green Lantern -- Sixties*
Harry Potter and the Deathly Hallows: Part 2 -- Nineties
I Am Number Four -- 00s
Mission Impossible -- Sixties
The Muppets -- Fifties
Pirates of the Caribbean: On Stranger Tides -- Sixties (part of Disney's movies based on rides series)
Rise of the Planet of the Apes -- Sixties
Sherlock Holmes -- Nineteenth Century
The Smurfs -- Fifties
Spy Kids: All the Time in the World -- 00s
Thor -- Sixties
Transformers: Dark of the Moon -- Eighties
X-Men: First Class -- Sixties
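For those keeping score, here is a quick tally of the list above by decade of origin; the split around the 1976 Act is the relevant one for the argument.

```python
from collections import Counter

# Decade of origin for each 2011 franchise film listed above.
decades = [
    "1920s",  # The Adventures of Tintin
    "1950s",  # Alvin and the Chipmunks: Chipwrecked
    "2000s",  # The Twilight Saga: Breaking Dawn - Part 1
    "1940s",  # Captain America: The First Avenger
    "1930s",  # Conan the Barbarian
    "2000s",  # Cowboys & Aliens
    "2000s",  # Diary of a Wimpy Kid: Rodrick Rules
    "1930s",  # The Green Hornet
    "1960s",  # Green Lantern
    "1990s",  # Harry Potter and the Deathly Hallows: Part 2
    "2000s",  # I Am Number Four
    "1960s",  # Mission: Impossible
    "1950s",  # The Muppets
    "1960s",  # Pirates of the Caribbean: On Stranger Tides
    "1960s",  # Rise of the Planet of the Apes
    "1800s",  # Sherlock Holmes
    "1950s",  # The Smurfs
    "2000s",  # Spy Kids: All the Time in the World
    "1960s",  # Thor
    "1980s",  # Transformers
    "1960s",  # X-Men: First Class
]

print(Counter(decades))
pre_1976 = sum(d < "1970s" for d in decades)   # lexicographic comparison works for "YYYYs" labels
print(f"Properties created before the 1976 Act: {pre_1976} of {len(decades)}")
```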
You can quibble with some of my calls here. I quibbled with myself quite a bit, going back and forth on The Adjustment Bureau (old), Cars (new), Puss in Boots (old) and Diary of a Wimpy Kid among others, but no matter what standards you use, it's almost impossible to see anything in the data that supports the idea that these extremely long copyrights have increased the production of highly marketable properties.
At best you could argue that the extensions might have had a positive effect but it was small enough to be swamped by other technological, economic and demographic factors. At worst, you could make the case that copyright laws were approximately optimal in the middle of the Twentieth Century and that the extensions have actually inhibited innovation.
Like patents, copyrights are necessary but highly intrusive regulations. Taken to an extreme, they distort markets, divert resources from creators to legal departments, encourage consolidation and set up onerous barriers to entry for small companies and start-ups.
For another layer of irony here, take a look at how Disney approached intellectual property in its early days.
* Technically very late Fifties (or even Forties if you count earlier character with the same name)
Also posted at MippyvilleTV.
Thursday, December 22, 2011
Two telling quotes from Bill Adair of PolitiFact
Without getting into the pros and cons of this most recent debate, I did want to share a couple of quotes from PolitiFact editor-in-chief Bill Adair:
From NPR:
"We're going to make the best calls we can in a pretty gutsy form of journalism," he says. "When we do, I think it's natural that the people on one side or other of this very partisan world we live in are going to be unhappy."And from PolitiFact itself:
The most over-the-top response (was it tongue-in-cheek?) was a rant from Jim Newell in Gawker under the headline "Why PolitiFact is bad for you." He conveniently ignored the fact that our fact-checks are based on hours of journalistic research and portrayed them as the work of rogue bloggers with a gimmicky meter.

We've mentioned concerns about the decline of journalism and how various factors compound the problem. This is another one of those compounding factors: the strange obliviousness of many journalists. Adair, on the record, describes his own work as "gutsy." He holds up "hours of journalistic research" as an impressively high standard. He seems incapable of thinking of criticism as being based on anything but partisan bitterness. (If you think I'm cherry-picking here, follow the link above. The whole piece is like this.)
Even if you put aside the many criticisms of PolitiFact (spelled out cogently and with crushing thoroughness by Chait) and view them in the best (and I do mean best) possible light, the most you can say for the organization is that it's doing what we used to think of as standard due diligence from journalists.
Wednesday, December 21, 2011
Intellectual property: the story that never ends
If you look at the cars we have, they're all of course different but they have a lot of really profound similarities. You almost always turn a key in the ignition. You have your gas pedal and your brake, and you push them both with your right foot. You steer them with a wheel. There's a speedometer and a fuel indicator in more-or-less the same place. They use mirrors so you can see where you're going without constantly turning your head. Would it be a better world if for twenty years someone had held a patent on Using Mirrors To Allow Drivers To See Behind Them Without Turning Their Head? I say, no. Absent the ability of new entrants into the automobile market to copy some of the basic concepts of what a usable car looks like, we would have had much less competition and much less innovation around the real cutting edge of the automobile industry.
This was not the most interesting thing on Moneybox today, but it fit really well into an evolving theme that we have been seeing recently about how the patent industry is formalizing rent-seeking. This cannot be good in the long run.
Now, it is true that I think the driverless car is an overrated concept. Like the jetpack, it is a neat idea with a lot of very difficult implementation issues. In the case of the driverless car, the main issues, in my opinion, are rethinking the complex web of liability we have constructed around vehicles and smoothly integrating them into mixed-use roadways.
The rise of bicycle commuting has been an extremely favorable development, despite the occasional tension between cars and bikers. But I wonder whether driverless cars will be able to handle treating cyclists as other vehicles, or whether the smaller profile of the bike will make it harder for the car to account for them. The same concerns come up with pedestrians, especially in large cities.
Still more adventures in intellectual property
The patent war between Apple Inc. and smartphone rival Samsung Electronics continues to escalate, and there's only one way to describe the latest vicious salvo: :)
That's right, it appears that Samsung has initiated a lawsuit against Apple governing the company's use of emoticons.
According to a report from patent observer Florian Mueller, who has been dependably covering the worldwide patent wrestling match between Apple and Android manufacturers, one of four new patent lawsuits filed by Samsung in German court is over, once again, yes, emoticons.
Believe it or not, Samsung does indeed own a patent on smartphone use of emoticons. It won the European rights to that "technology" in 2000, and interested readers can see the actual patent here.
A few more thoughts on journalistic conformity
There are at least two potentially serious consequences to the amount of carbon we've been pumping into the atmosphere. The first is global warming. The second is the chemical and biological changes in the oceans.
Though it's difficult to compare the likely impact of phenomena this big and complex, the second problem is arguably on a level with the first, a point driven home in the LA Times' Pulitzer-winning series on the subject:
As industrial activity pumps massive amounts of carbon dioxide into the environment, more of the gas is being absorbed by the oceans. As a result, seawater is becoming more acidic, and a variety of sea creatures await the same dismal fate as Fabry's pteropods.

And we haven't even gotten to the primeval toxic slime (you really do need to read the whole series).
The greenhouse gas, best known for accumulating in the atmosphere and heating the planet, is entering the ocean at a rate of nearly 1 million tons per hour — 10 times the natural rate.
Scientists report that the seas are more acidic today than they have been in at least 650,000 years. At the current rate of increase, ocean acidity is expected, by the end of this century, to be 2 1/2 times what it was before the Industrial Revolution began 200 years ago. Such a change would devastate many species of fish and other animals that have thrived in chemically stable seawater for millions of years.
Less likely to be harmed are algae, bacteria and other primitive forms of life that are already proliferating at the expense of fish, marine mammals and corals.
In a matter of decades, the world's remaining coral reefs could be too brittle to withstand pounding waves. Shells could become too fragile to protect their occupants. By the end of the century, much of the polar ocean is expected to be as acidified as the water that did such damage to the pteropods aboard the Discoverer.
Some marine biologists predict that altered acid levels will disrupt fisheries by melting away the bottom rungs of the food chain — tiny planktonic plants and animals that provide the basic nutrition for all living things in the sea.
Given their common origin, comparable severity and potential for synergistic effects, topics like acidification should show up frequently in stories about global warming. Not all the time, but I would expect to see it in at least fifteen or twenty percent of the stories. It is simply a pairing that you would expect journalists to make on a fairly regular basis, but while a search of the last twelve months of the New York Times for "climate change" produces 10,509 hits, a search on '"climate change" acidification' over the same period produces 15.
(If we do a quick, back-of-the-envelope hypothesis test on the null that most journalists are well-informed, hard-working, independent thinkers...)
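To make that parenthetical concrete, here is a minimal sketch of such a test. It assumes, per the paragraph above, that independent, well-informed journalists would pair acidification with climate change in roughly fifteen percent of stories, and it treats the search counts as a crude sample.

```python
from scipy.stats import binom

# Crude hypothesis test: under the null that journalists independently pair
# acidification with climate change about 15% of the time (the low end of the
# expectation above), how surprising are 15 mentions in 10,509 stories?
n_stories = 10_509   # NYT hits for "climate change" over twelve months (from the text)
n_mentions = 15      # hits that also mention acidification (from the text)
p_null = 0.15        # assumed pairing rate under the null

print(f"Expected mentions under the null: ~{n_stories * p_null:.0f}")   # ~1576
p_value = binom.cdf(n_mentions, n_stories, p_null)
print(f"P({n_mentions} or fewer mentions | null): {p_value:.3g}")       # effectively zero
```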
The specific tragedy here is that, for all the ink that's been spilled on the impacts of carbon emissions, all we really get in the vast majority of cases are simply the same handful of stories endlessly recycled. We read dozens of articles but since the writers have converged on a tiny number of narratives we remain ill-informed.
The general tragedy is that this is the way almost all journalism works these days. Through a lack of independent thinking (often augmented by laziness and a lack of rigor), journalists quickly settle on a small number of templates which they seldom stray from, even though these templates leave out important aspects of the larger story. Stories on the environmental impacts of carbon leave out the oceans; stories on the economics of cable don't mention broadcast television; stories about the free-spending ways of countries like Greece and Spain omit the fact that Spain was running a surplus before the crisis.
It would be easy to find more examples. Finding counter-examples is the tough part.
Tuesday, December 20, 2011
Prediction is difficult
In a nutshell: I've become far less confident about our ability to accurately describe possible outcomes more than a decade out. Correspondingly, I've become increasingly sceptical of the value of analyses of decisions now that attempt to assess the costs and benefits of action over horizons any longer than a decade.
I think that this was a very good complement to yesterday's discussion of inference from observational medical research. Models are hard. The more complicated the model is, the more likely something is to go wrong. Predictions about the future suffer from these sorts of complications -- we honestly do not know what the circumstances will be like in the future or how many unlikely events will actually happen. Over the short run, predictions can bank on it being unlikely that many "low event rate but high impact" events will happen, and we can also neglect the slow but incremental variables that are currently unnoticed but which will make a huge difference in the future.
In the same sense, looking at low event rate outcomes in incomplete data (most of pharmacovigilance) leads to a lot of innate uncertainty. In both cases, I think it makes a lot of sense to be humble about what our models can tell us and to focus on policy that accepts that there is a lot of innate uncertainty in some forms of prediction.
Hat-tip: Marginal Revolution
Monday, December 19, 2011
Can we do observational medical research?
Andrew Gelman has a really nice post on observational medical research. How could I not respond?
In the post he quotes David Madigan who has a fairly strong opinion on the matter:
I’ve been involved in a large-scale drug safety signal detection project for the last two or three years (http://omop.fnih.org). We have shown empirically that for any given safety issue, by judicious choice of observational database (we looked at 10 big ones), method (we looked at about a dozen), and method setup, you can get *any* answer you want – big positive and highly significant RR or big negative and highly significant RR and everything in between. Generally I don’t think there is any way to say definitively that any one of these analysis is a priori obviously stupid (although “experts” will happily concoct an attack on any approach that does not produce the result they like!). The medical journals are full of conflicting analyses and I’ve come to the belief that, at least in the medical arena, the idea human experts *know* the *right* analysis for a particular estimand is false.
This seems overly harsh to me. Dr. Madigan (who I think is an amazing statistician) is working with OMOP, which I recall as drawing on data sets of fairly low quality (prescription claims for Medicare/Medicaid, the GPRD and other clinical databases, and the like). That is a necessary evil to get the power to detect rare (but serious) adverse drug outcomes. But these databases are often problematic when extended beyond extremely clear signal detection issues.
The clearest example of high quality medical data is likely to be the randomized, controlled, double-blinded clinical trial. But there is a whole layer of data between these two extremes of data quality (prospective cohort studies, for example) that has also generated a lot of important findings in medicine.
Sure, it is true that prospective cohort studies tend to be underpowered to detect rare adverse drug side effects (for precisely the same reason that RCTs are). But there is a lot of interesting observational medical research that does not generate conflicting results, or where the experts really do seem to have a good grasp of the problem. The links between serum cholesterol levels and cardiovascular events, for example, seem relatively solid and widely replicated. So do the links between smoking and lung cancer (or cardiovascular disease) in North American and European populations. There is a lot that we can learn from observational work.
So I would be careful to generalize to all of medical research.
That being said, I have a great deal of frustration with medical database research, for a lot of the same reasons as David Madigan. I think the issues with trying to do research in medical claims data would make an excellent series of posts, as the topic is way too broad for a single post.