Friday, March 22, 2013

75 years of progress

While pulling together some material for a MOOC thread, I came across these two passages that illustrate how old much of today's cutting-edge educational thinking really is.

First from a 1938 book on education*:
" Experts in given fields broadcast lessons for pupils within the many schoolrooms of the public school system, asking questions, suggesting readings, making assignments, and conducting test. This mechanize is education and leaves the local teacher only the tasks of preparing for the broadcast and keeping order in the classroom."
And compare this recent entry from Thomas Friedman:
For relatively little money, the U.S. could rent space in an Egyptian village, install two dozen computers and high-speed satellite Internet access, hire a local teacher as a facilitator, and invite in any Egyptian who wanted to take online courses with the best professors in the world, subtitled in Arabic.
I know I've made this point before, but there are a lot of relevant precedents to the MOOCs, and we would have a more productive discussion (and be better protected against false starts and hucksters) if people like Friedman would take some time to study up on the history of the subject before writing their next column.



* If you have any interest in the MOOC debate, you really ought to read this Wikipedia article on Distance Learning.

Wow, just wow!

Via Thomas Lumley:
You may have heard about the famous Hawthorne experiment, where raising light levels in a factory improved output, as did lowering them, as did anything else experimental. The original data have been found and this turns out not to be the case.
The mind boggles at just how often I have used this example and how wrong it was.  I have read the paper once, not that closely, but the overall impression I have is that Levitt is correct here. 

Thursday, March 21, 2013

Cloud computing


Kevin Drum
What's different is that Google's products are all cloud-based. When Google Reader goes away on July 1, that's it. It's gone. If it were an ordinary bit of software that I'd installed on my PC, this wouldn't be a problem. It would keep on working for years even if it never got another update. I'd need to replace it eventually—because of an OS upgrade or a desire for new features that finally got too strong—but I'd probably have years to work that out.
 
I think that this element of the new model of software is worth a lot more attention than it is getting.  Just look at the SimCity fiasco and ask what would happen if Microsoft made the same mistakes with a new cloud version of Office.  Now, Microsoft is an extraordinarily well run company, so the chances of that are quite small.

But the general trend toward renting access is going to make interruptions of service (or just the internet misbehaving) a much bigger deal. 

Wednesday, March 20, 2013

Apple, J.C. Penney* and fitness landscapes in everything

James Kwak has an excellent piece on Ron Johnson's unfortunate run as CEO of J.C. Penney.
According to today’s Wall Street Journal article, Johnson quickly eliminated coupons and most sales at J.C. Penney.

“Johnson bristled when a colleague suggested that he test his new no-discounts strategy at a few stores. . . . ‘We didn’t test at Apple,’ the executive recalled Mr. Johnson . . . saying.”

Well, yeah. Apple doesn’t discount because they sell stuff that people really, really want and that they can’t get anyplace else. And they don’t test because Steve Jobs refused to. At Penney? Sales have fallen by about 30 percent.

This doesn’t mean Johnson is stupid, or that he’s going to fail as CEO. Apparently he has partially reversed his early decision, which is a good sign. But it brings up a common feature of external CEO hires. Companies in a perceived crisis often look outside for a new leader, hoping for a superman (or -woman) who can singlehandedly turn around the organization. Not completely illogically, they tend to look for people at successful companies. “Make us more like X,” they pray. In Penney’s case, X = Apple.

There are two important questions they tend not to ask, however. First, was Apple successful because of Johnson, or was he just along for the ride? Yes, he was the main man behind the Apple Store (although, according to Walter Isaacson’s book, Steve Jobs was really the genius behind everything). But was the success of the Apple Store just a consequence of the success of the iPhone?

Second, even if Johnson was a major contributor to Apple’s success, how much of his abilities are transferable to and relevant to J.C. Penney? There’s a big difference between selling the most lusted-after products on the planet and selling commodities in second-rate malls. When someone has been successful in one context, how much information does that really give you about how he will perform in a new environment?
The obvious interpretation here is as a cautionary tale of executive hubris, but you can also look at it in terms of fitness landscapes (the following will be fairly straightforward, but if the concept doesn't ring a bell you might want to check here, here, and of course, here).

Let's try thinking in terms of the retail fitness landscape (presented with the usual caveat that I'm way out of my field). Just how distant is the Apple Store from J.C.P.?

Apple Stores are a relatively small boutique chain (400 stores total, 250 in the U.S.) concentrated heavily in prime commercial urban and upscale suburban areas. Their customer demographics tend toward upper income, fashion-conscious early adopters with a demonstrated willingness to pay a premium for quality. Inventories consist of a few heavily-branded, high-quality, high mark-up items, all of which come from one very visible supplier with an excellent reputation. This allows an unusual (perhaps unique -- there's not another Apple) symbiotic relationship. The stores give the supplier a presence and a profit center while the stores benefit from the supplier's powerful brand, large advertising budget and unparalleled PR operation.

In terms of customers, products, brand, retail space, vendor relations, logistics, scale and business model, moving from the Apple Store to JCP was a shift to a distant part of the retail landscape. What Johnson did, in essence, was say "these coordinates are associated with an extremely high point on the landscape (the Apple Store). Even though we've made large shifts in many of these dimensions, we can keep the same coordinates for the other dimensions and we'll find another optimum."

To put this in context, here's a useful example from T. Grandon Gill
Suppose, for example, you had a fitness function that mapped the list of ingredients to an objectively determined measure of “taste fitness” for all the recipes in a cookbook. If you were to do a regression on taste (dependent variable) using the ingredients (independent variables), you might find—for instance—that garlic shows a high positive significance. What would that tell you (other than, possibly, that the individuals rating the recipes enjoyed garlic)? What it would definitely not tell you is that you could improve your recipe for angel cake by adding garlic to it. Indeed, the whole notion of applying a technique that assumes linear decomposability to a fitness landscape that is so obviously not decomposable is preposterous.
Substitute a low level of coupons for a high level of garlic and you have a pretty good picture of the JCP strategy.
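Gill's point is easy to make concrete with a toy simulation. Everything below -- the ingredients, the taste function, the numbers -- is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up "cookbook": each recipe is an amount of garlic and sugar.
# Taste is deliberately non-decomposable: garlic helps savory dishes
# (low sugar) and hurts sweet ones (high sugar).
def toy_taste(garlic, sugar):
    return 5 + garlic * (1 - sugar)

n = 200
garlic = rng.uniform(0, 2, n)    # arbitrary units
sugar = rng.uniform(0, 1.5, n)   # most recipes in the book lean savory
taste = toy_taste(garlic, sugar) + rng.normal(0, 0.1, n)

# A naive linear regression of taste on the ingredients finds a
# clearly positive garlic coefficient...
X = np.column_stack([np.ones(n), garlic, sugar])
beta, *_ = np.linalg.lstsq(X, taste, rcond=None)
print(f"garlic coefficient: {beta[1]:.2f}")

# ...but adding garlic to the sweetest recipe (the "angel cake") makes it worse.
print(toy_taste(0.0, 1.5), toy_taste(1.0, 1.5))
```

The regression happily reports that garlic improves recipes on average, while the non-decomposable taste function guarantees it ruins the cake -- which is exactly the trap of treating a rugged surface as linearly decomposable.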

How do we know the retail landscape is rugged? We don't, but we do have considerable evidence that certain approaches work better in some circumstances than they do in others (i.e. there are multiple local optima). More to the point, Johnson's entire strategy pretty much assumed that the many small and large players in the department store area (including Macy's, Sears, Dillard's, Kohl's, the pre-Johnson JCP and countless smaller chains and individual stores) were trapped in one or more low-quality optima. When you have this many diverse companies in a market this competitive and this mature, you expect to see a fair amount of something analogous to gradient searching ("That worked; let's do more of it."). If they haven't settled on your optimum point, it's almost certainly because they settled on another.

The lessons -- when you move into an established market you should probably assume the established players know the field and you should probably not assume that what worked somewhere else will work here -- could be (and were) reached without referring to fitness landscapes, but they do make a good framework for approaching a wide variety of problems.

Johnson moved to an unfamiliar region of a probably rugged landscape and refused to explore the surrounding area for higher points despite the fact that numerous other players that had explored the region had settled on completely different points. When you phrase it this way, it doesn't sound good (of course, Johnson's approach doesn't sound good when you phrase it most ways).


* The 'C' stands for 'Cash' -- no, really.

Tuesday, March 19, 2013

If you actually want to close the achievement gap...

An excellent story from KPCC's Take Two. Considerably more effective in the audio version if you have the time.
Teenager Michelle Zamora has big dreams to become a civil engineer.

“Since 4th grade,” Zamora says, “I told myself I want to go to Stanford University.”

Zamora would be the first in her family to go to college, and as a self-described “smart kid,” Stanford never seemed too far-fetched an idea.

But at age 15, Michelle Zamora made a mistake: she got pregnant. And her dreams of college seemed to vanish.

Like thousands of other California teens, Zamora dropped out of high school.

She is among the majority of the state's teen moms --83%-- that come from low-income households. According to the California Department of Education, the state ranks number one nationwide with its rate of pregnancy among teens.

The worst part, she said, was the way most people assumed she was condemned to a future that she didn’t want. People told her “well, you’re just going to be another teenager on welfare,” or “you’re not going to make it.” Zamora started to believe them.

And then she found out about a program in Baldwin Park that has given her renewed hope.

In the late 1990s, officials in the Baldwin Park Unified School District worried that they were losing too many students due to pregnancy. Using federal Early Head Start funds, the district launched an innovative program to ensure teen moms could stay in school.

When Zamora’s daughter was born in 2011, a friend told her about North Park high school which provides on-site daycare so teen moms and dads can complete coursework.

A Continuation high school, North Park enrolls students who failed or dropped out, but now want to finish high school. Its child care program is one of 18 at high schools across Los Angeles county that cater to teen parents. Since 1999, about 60% of North Park students have graduated and gone on to higher education.

Monday, March 18, 2013

Today's vocabulary term is "flack-to-hack ratio"

Felix Salmon has one of those that-explains-a-lot posts up on his blog:
Quartz, in this deal, is getting one article, which needs a fair amount of editing; it’s a tiny proportion of Quartz’s daily output. Meanwhile, Brandtone is getting something very valuable indeed. Just look at the US flack-to-hack ratio: it’s approaching 9:1, according to the Economist, which means that for every professional journalist, there are nine people, some of them extremely well paid, trying to persuade that journalist to publish something about a certain company. That wouldn’t be the case if those articles weren’t worth serious money to the companies in question.

How valuable? How about somewhere between $250,000 and $1 million? That’s the amount of money that Fortune’s ad-sales team was asking, earlier this month, for a new product called Fortune Trusted Original Content:

Similar to licensed editorial content, TOC involves creating original, Fortune-branded editorial content (articles, video, newsletters) exclusively for marketers to distribute on their own platforms.

After news of the TOC program appeared, it was walked back — abolished, essentially. You can see why Fortune’s top editorial brass would be uncomfortable with the idea that Fortune editorial content could be commissioned by, and appear for the sole benefit of, advertisers. So now they’re going back to the old model, of just allowing advertisers to license (reprint, basically) stories which were independently commissioned and published by Fortune’s editors.

Still, the price point on the now-aborted TOC program is revealing. The cost of the content, from a “trusted freelancer”, would probably not be much more than a couple of thousand dollars — but the cost of the content to the advertiser could be as much as $1 million. The difference is entirely accounted for by the value of the Fortune brand.
The flack-to-hack ratio may have something to do with another recurring topic, the almost complete lack of coverage of the reemergence of over-the-air television (see here, here, here, here, and... hell, just do a search). Weigel Broadcasting may be an extraordinarily well run company, but as long as they run a largely flackless operation, you'll probably never hear about them.

I apologize if I posted this before...

... but this Marketplace piece on a program to get disadvantaged families out of bad neighborhoods is definitely worth checking out.

Sunday, March 17, 2013

Weekend movie blogging -- Herman Mankiewicz in Oz

With Oz the Great and Powerful currently in theaters, it's worth taking a minute to give a little credit to someone who made an essential but largely unrecognized contribution to the original classic, Herman Mankiewicz. Famed film historian/director/sycophant Peter Bogdanovich has spent the past few decades trying to undermine Mankiewicz's reputation ever since Pauline Kael had the temerity to suggest that Mankiewicz was the primary author of the script of Citizen Kane.

Bogdanovich has sold the "Herman Mankiewicz was a talented hack" line to countless credulous journalists and film students over the years and supported the claim with a highly selective recounting of Mankiewicz's resume.  With Oz back in such a big way, one of the films Bogdanovich omits is particularly relevant:
In February, 1938, he was assigned as the first of ten screenwriters to work on The Wizard of Oz. Three days after he started writing he handed in a seventeen-page treatment of what was later known as "the Kansas sequence". While Baum devoted less than a thousand words in his book to Kansas, Mankiewicz almost balanced the attention on Kansas to the section about Oz. He felt it was necessary to have the audience relate to Dorothy in a real world before transporting her to a magic one. By the end of the week he had finished writing fifty-six pages of the script and included instructions to film the scenes in Kansas in black and white. His goal, according to film historian Aljean Harmetz, was "to capture in pictures what Baum had captured in words--the grey lifelessness of Kansas contrasted with the visual richness of Oz." He was not credited for his work on the film, however.
There are, of course, many things that have to go right to produce a truly iconic film, but if you had to pick the one element that made the film work and made people remember it, you'd probably have to go with Mankiewicz's contribution.


Saturday, March 16, 2013

Playing with paper over at You Do the Math

Been thinking a lot about paper from a materials-science standpoint (the mind wanders when SAS runs slowly), specifically about using paper to teach kids about the physical properties of different shapes and how we test them.

I've kicked off an ongoing thread on the subject at my math teaching blog. The first (with the admittedly pretentious title, "Reseeing Paper") is an overview of paper as a way of exploring shape. The second ("Paper Towers") lays out the rules for some engineering projects/competitions suitable (almost without alteration) for classes ranging from fourth grade to freshman engineering (though one would like to think that the results for the freshmen would be a bit more sophisticated). The projects would also be suitable for science-based family activities. There is more of this to come (I haven't even started on corrugation).

Check it out and if you know of any teachers who are looking for new (and cheap) manipulatives, please send them the links. There are any number of potential lesson plans here.

Thanks,
Mark

p.s. Suggestions are always welcome.

Friday, March 15, 2013

When you hear proposals to control tuition by reducing instruction cost...

When you hear proposals to control tuition by reducing instruction cost (in the recent discussions of MOOCs, for example), here are a couple of numbers you should keep in mind. They haven't been adjusted for inflation and they're based on a sample size of one, but I still think they'll give you a pretty clear picture.

Back in the Nineties I did a four year stint as a lecturer at a large state university. The standard load for lecturers was four courses a semester and the pay was seventeen thousand and change. (I was only on a half load with the other half of my funding coming from other duties like supervising grad students but the following numbers still hold).

If you break it down, that comes to less than twenty-five hundred per three-hour course. With the exception of a couple of upper level courses, my sections generally ranged from twenty-five to one hundred and fifty students. That means that the per-student cost associated with the lecture portion of one of those courses ranged from less than one hundred dollars at the top end to around fifteen dollars at the bottom.
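The arithmetic above can be checked in a few lines, using the post's own rough figures (1990s dollars, sample size of one):

```python
# Back-of-envelope version of the numbers in the post.
salary = 17_000              # "seventeen thousand and change" per year
courses_per_year = 4 * 2     # four courses a semester, two semesters

cost_per_course = salary / courses_per_year
print(f"per course: ${cost_per_course:,.0f}")   # a bit over $2,000

# Per-student cost of the lecture portion, smallest and largest sections.
for class_size in (25, 150):
    per_student = cost_per_course / class_size
    print(f"{class_size} students -> ${per_student:,.0f} per student")
```

Which lands right where the post says: under a hundred dollars per student for the small sections, around fifteen for the big ones.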

If someone has some current numbers I'd be glad to update the post but as far as I can tell, while tuition has continued to climb since my lecturer days, adjunct salaries have, at best, kept up with inflation and certainly haven't grown enough to be a major driver of education costs. But what's really amazing isn't that you can get people to take these jobs at this pay; it's that you can find wildly overqualified people -- promising scholars, gifted lecturers -- willing to take these jobs. That's how flooded the supply of would-be professors is.

There are well-paid, even over-compensated professors out there but they are all paid primarily for something other than teaching, be it their research or their reputation (which reflects on the school) or the grants they pull in. We can and probably should have a serious discussion about these roles (maybe starting here) but that's a different conversation.

As for controlling tuition by reducing instructor costs, that conversation has to start with a realistic picture of how much people who are hired simply to teach actually make.

Thursday, March 14, 2013

The Rise of P-Value

In the middle of a recent conversation prompted by this post by Andrew Gelman, it struck me that I couldn't recall encountering the term p-value before I started studying statistics in the Nineties. These days you frequently run across the term in places like the NYT article Gelman cited or this piece in the Motley Fool; were they always there and I just missed them?

Fortunately we have Google's Ngram viewer to resolve these questions and apparently the answer is a qualified yes. While people were talking about p-values at the beginning of the decade, more people were talking about them by the end.

The question now is how much of that growth is attributable to general interest writing like the NYT.



Wednesday, March 13, 2013

Epidemiology and Truth

This post by Thomas Lumley of Stats Chat is well worth reading and thinking carefully about.  In particular, when talking about a study of processed meats and mortality he opines:

So, the claims in the results section are about observed differences in a particular data set, and presumably are true. The claim in the conclusion is that this ‘supports’ ‘an association’. If you interpret the conclusion as claiming there is definitive evidence of an effect of processed meat, you’re looking at the sort of claim that is claimed to be 90% wrong. Epidemiologists don’t interpret their literature this way, and since they are the audience they write for, their interpretation of what they mean should at least be considered seriously.


I think that "support of an association" has to be the most misunderstood phrase in epidemiology (and we epidemiologists are not innocent of this mistake ourselves).  The real issue is that cause is a very tricky animal.  It can be the case that complex disease states have a multitude of "causes".

Consider a very simple (and utterly artificial) example.  Let's assume (no real science went into this example) that hypertension (high systolic blood pressure) occurs when multiple exposures overwhelm a person's ability to compensate for the insult.  So if you have only one exposure off the list then you are totally fine.  If you have two or more then you see elevated blood pressure.  Let's make the list simple: excessive salt intake, sedentary behavior, a high stress work environment, cigarette smoking, and obesity.  Now some of these factors may be correlated, which is its own special problem.

But imagine how hard this would be to disentangle, using either epidemiological methods or personal experimentation.  Imagine two people who work in a high stress job, one of whom eats a lot of salt.  They both start a fitness program due to borderline hypertension.  One person sees the disease state vanish whereas the other sees little to no change.  How do you know what was the important factor?

It's easy to look at differences in the exercise program; if you torture the data enough it will confess.  At a population level, you would expect completely different results depending on how many of these factors the underlying population had.  You'd expect, in the long run, to come to some sort of conclusion but it is unlikely that you'd ever stumble across this underlying model using associational techniques. 
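A few lines of code make the toy model concrete (the exposures and the two-or-more threshold are the invented ones from above):

```python
# The made-up threshold model from this post: hypertension appears
# whenever a person carries two or more of the five exposures.
EXPOSURES = {"salt", "sedentary", "stress", "smoking", "obesity"}

def hypertensive(exposures):
    return len(exposures) >= 2

# Two people in high-stress jobs start a fitness program
# (i.e. drop "sedentary"); only person A also eats a lot of salt.
person_a = {"stress", "sedentary", "salt"}
person_b = {"stress", "sedentary"}

for name, person in (("A", person_a), ("B", person_b)):
    print(name, hypertensive(person), "->", hypertensive(person - {"sedentary"}))
```

Same intervention, different outcomes: A stays hypertensive, B recovers, and nothing in the exercise data itself tells you why.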

The argument continues:
So, how good is the evidence that 90% of epidemiology results interpreted this way are false? It depends. The argument is that most hypotheses about effects are wrong, and that the standard for associations used in epidemiology is not a terribly strong filter, so that most hypotheses that survive the filter are still wrong. That’s reasonable as far as it goes. It does depend on taking studies in isolation. In this example there are both previous epidemiological studies and biochemical evidence to suggest that fat, salt, smoke, and nitrates from meat curing might all be harmful. In other papers the background evidence can vary from strongly in favor to strongly against, and this needs to be taken into account.
 
This points out (correctly) the troubles in just determining an association between A and B.  It's ignoring all of the terrible possibilities -- like A is a marker for something else and not the cause at all.  Even a randomized trial will only tell you that A reduces B as an average causal effect in the source population under study.  It will not tell you why A reduced B.   We can make educated guesses, but we can also be quite wrong.

Finally, there is the whole question of estimation.  If we count a result as false whenever the reported size of the average causal effect of intervention A on outcome B is biased, then I submit that 90% is a very conservative estimate (especially if you make "truth" an interval around the point estimate at the precision of the reported estimate, given the oddly high number of decimal places people like to quote for fuzzy estimates). 

But that last point kind of falls into the "true but trivial" category . . .


Tuesday, March 12, 2013

Landscapes in everything

SLIGHTLY UPDATED

One of the issues I have with economics exceptionalism is the word 'everything,' as in "markets in everything" or "the hidden side of everything." Not that there's anything wrong with applying economic concepts to a wide variety of questions (I do it myself), but at some point they become overused and start crowding out ideas that are better in a given context.

Think about all the times you heard phrases like the 'marriage market' often followed by the implicit or explicit suggestion that the tools of economics hold the key to understanding all sorts of human behavior even in cases where the underlying assumptions of those tools probably don't apply. Now, for example, compare that to the number of times you've recently heard someone describe something as a fitness landscape when they weren't talking about evolution or physics (OK, that's not the term physicists generally use but the concept is basically the same).

Landscapes are a powerful and widely applicable concept, arguably more so than markets (they are also a long-time fascination of mine). Ideas like gradient searches, perturbation, annealing and, most of all, local optimization are tremendously useful, both to explain complex problems and to suggest approaches for solving them. Once you start thinking in those terms you can see landscapes about as often as Tyler Cowen sees markets.

You can even find researchers coming up with the kind of unexpected, everyday examples that you might expect in a Steven Levitt column.

My favorite recent example (at least recent to me) is T. Grandon Gill's observation that recipes in a cookbook are essentially the coordinates of local optima on a culinary fitness landscape where the amounts of each ingredient are the dimensions and taste is the fitness function (technically we should add some dimensions for preparation and make some allowance for the subjectivity of taste, but I'm keeping things simple).

This is a great example of a rugged landscape that everyone can relate to. You can find any number of delicious recipes made with the same half dozen or so ingredients. As you start deviating from one recipe (moving away from a local optimum), the results tend to get worse initially, even if you're moving toward a better recipe.
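For readers who prefer code to metaphor, here's a minimal sketch of why greedy search on a rugged landscape settles wherever it starts. The "taste" function and its two peaks are entirely made up:

```python
import math

# A one-dimensional "taste" landscape with two local optima: a decent
# recipe near x = 1 and a better one near x = 4 (numbers invented).
def taste(x):
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-((x - 4) ** 2) / 0.5)

def hill_climb(x, step=0.1, iters=200):
    """Greedy local search: move to a neighbor only if it tastes better."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=taste)
        if best == x:
            break
        x = best
    return x

print(round(hill_climb(0.0), 1))  # settles near 1 -- the nearby local optimum
print(round(hill_climb(3.0), 1))  # settles near 4 -- the better peak
```

A climber starting near the weaker recipe never reaches the better one, because every small step away tastes worse first, which is exactly the "gets worse before it gets better" property of rugged landscapes.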

Approaching something as a rugged landscape can provide powerful insights and very useful tools, which leads to another concern about economic exceptionalism -- economics as a field tends to make little use of these models and many economists routinely make modeling assumptions that simply make no sense if the surface being modeled really is rugged.

I asked Noah Smith* about this and as part of his reply he explained:
But for analyzing the equilibrium state of the economy - prices and quantities - economists tend to try as hard as they can to exclude multiple equilibria. Often this involves inventing arbitrary equilibrium criteria with zero theoretical justification. This is done routinely in micro (game theory) as well as in macro. An alternative procedure, commonly used in macro by DSGE practitioners, is to linearize all their equations, thus assuring "uniqueness". Some researchers are averse to this practice, and they go ahead and publish models that have multiple equilibria; however, there is a strong publication bias against models that have multiple equilibria, so many economists are afraid to do this. An exception is that some models with two equilibria (a "good" equilibrium and a "bad" or "trap" equilibrium) do get published and respected. Models with a bunch of equilibria, or where the economy is unstable and tends to shift between equilibria on its own at a high frequency, are pretty frowned upon.
This doesn't mean that economists can't work with these concepts, but it does mean that as economists increasingly dominate the social sciences, approaches that don't fit with the culture and preferred techniques of economics are likely to be underused.

And some of those techniques are damned useful.

* now with source.

Monday, March 11, 2013

Some epidemiology for a change

John Cook has an interesting point:
When you reject a data point as an outlier, you’re saying that the point is unlikely to occur again, despite the fact that you’ve already seen it. This puts you in the curious position of believing that some values you have not seen are more likely than one of the values you have in fact seen.
 
This is especially problematic in the case of rare but important outcomes, and it can be very hard to decide what to do in these cases.  Imagine a randomized controlled trial for the effectiveness of a new medication for a rare disease (maybe something like memory improvement in older adults).  One of the treated participants experiences sudden cardiac death whereas nobody in the placebo group does. 
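To see just how little the data can say in a scenario like this, here's a quick stdlib calculation with made-up trial sizes (200 per arm; the numbers are illustrative, not from any real trial):

```python
from math import comb

# One sudden cardiac death among 200 treated, none among 200 controls.
n_treated = n_control = 200
deaths = 1

# Under the null (treatment irrelevant), the lone death is equally likely
# to land in either arm.  Hypergeometric probability that all observed
# deaths fall in the treated arm:
p_one_sided = (comb(n_treated, deaths) * comb(n_control, 0)
               / comb(n_treated + n_control, deaths))
print(f"one-sided p-value: {p_one_sided:.2f}")  # 0.50
```

A p-value of 0.50 is as uninformative as data gets, yet the event could still be the most important thing in the trial, which is precisely the bind described below.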

On one hand, if the sudden cardiac death had occurred in the placebo group, we would be extremely reluctant to advance this as evidence that the medication in question prevents death.  On the other hand, rare but serious drug adverse events both exist and can do a great deal of damage.  The true but trivial answer is "get more data points".  Obviously, if this is a feasible option it should be pursued. 

But these questions get really tricky when there is simply a dearth of data.  Under these circumstances, I do not think that any statistical approach (frequentist, Bayesian or other) is going to give consistently useful answers, as we don't know if the outlier is a mistake (a recording error, for example) or if it is the most important feature of the data.

It's not a fun problem. 

More weekend work avoidance -- the pleasures of microbudgets

Watched the first and second arcs of a fairly obscure British science fiction show from 1979 called Sapphire and Steel. It was apparently intended as a low-budget answer to Doctor Who (which, as those familiar with it can attest, was not exactly the Avatar of the Seventies).  The result was a sci-fi/fantasy/horror show that had to be shot on standing sets with small casts and very limited special effects.

The result is some really impressive constrained problem solving by writer P.J. Hammond (with considerable assistance from directors David Foster, Shaun O'Riordan and the show's solid leads, David McCallum and Joanna Lumley, the only expensive aspects of the production). Hammond did sometimes lapse into dramatic Calvinball, obviously making up new rules now and then to get himself out of narrative corners, but those bits are easy to overlook, particularly when watching the ways he found to work around the rules he was handed by the producers.

In lieu of optical effects and creature make-up, you get a spot of light on the floor, a shadow on the wall, an ordinary thing in a place it shouldn't be. In an ironic way, the show would almost certainly look cheaper now if they had spent the extra money on those late Seventies effects then. In a sense, they didn't have enough money to be cheesy (except perhaps in the opening title).

There's a bigger point to be made about the costly vs. the clever but the weekend is almost up and my work is going to be unavoidable in a few hours.

Sunday, March 10, 2013

Weekend gaming -- new entries at You Do the Math

I've got three big ongoing threads planned for my teacher support blog, one on the SAT, one on a special class of manipulatives, and one on teaching programming, so naturally I've been avoiding those topics and writing about games instead. If you also have an interest in games and work to avoid, you might drop by and check out:

The Exact Chaos Game -- fleshing out a suggestion by John D. Cook, this lets players bet on iterations of a surprisingly unpredictable function.

Kriegspiel and Dark Chess -- more Wikipedia than me but worth checking out if you'd like to see what chess might look like as a game of imperfect information.

Facade Chess -- along the same lines, here's an "original" imperfect-information variant where a subset of the pieces may be disguised as other pieces.

Saturday, March 9, 2013

Do op-ed writers provide their own hyperlinks?

Or is some intern handed the copy and told to find some appropriate citations? I generally assume that the links are an intrinsic part of anything written specifically for online consumption but what about the online version of something primarily intended for print?

Take this op-ed by Joe Scarborough and Jeffrey D. Sachs writing for the Washington Post which starts with the following paragraph:
Dick Cheney and Paul Krugman have declared from opposite sides of the ideological divide that deficits don’t matter, but they simply have it wrong. Reasonable liberals and conservatives can disagree on what role the federal government should play yet still believe that government should resume paying its way.
As a commenter on Krugman's blog pointed out, if you click on Krugman's name in that paragraph, you'll end up at a post that starts as follows:
Right now, deficits don’t matter — a point borne out by all the evidence. But there’s a school of thought — the modern monetary theory people — who say that deficits never matter, as long as you have your own currency.

I wish I could agree with that view — and it’s not a fight I especially want, since the clear and present policy danger is from the deficit peacocks of the right. But for the record, it’s just not right.
In other words, to support the claim that Krugman said deficits don't matter, Scarborough and Sachs point to Krugman saying explicitly that people who say deficits don't matter are wrong. Krugman then spends pretty much the entire post arguing that deficits will matter a great deal once we're out of the liquidity trap. Here's the key section.
So we’re talking about a monetary base that rises 12 percent a month, or about 400 percent a year.

Does this mean 400 percent inflation? No, it means more — because people would find ways to avoid holding green pieces of paper, raising prices still further.

I could go on, but you get the point: once we’re no longer in a liquidity trap, running large deficits without access to bond markets is a recipe for very high inflation, perhaps even hyperinflation. And no amount of talk about actual financial flows, about who buys what from whom, can make that point disappear: if you’re going to finance deficits by creating monetary base, someone has to be persuaded to hold the additional base.
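The compounding in that quote is easy to verify: 12 percent a month grows the base to roughly four times its starting level over a year, which is presumably what "about 400 percent a year" refers to. A quick check, in Python for concreteness:

```python
# 12 percent growth per month, compounded over twelve months.
monthly_rate = 0.12
annual_factor = (1 + monthly_rate) ** 12

print(round(annual_factor, 2))  # -> 3.9: the base roughly quadruples in a year
```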
This isn't to say that this post is in agreement with the op-ed; in terms of immediate action they take completely opposite positions. It would have been easy to spell out the distinction, but instead Scarborough and Sachs simply make a claim, then point us to something that directly contradicts it.

The strange thing here is that you could find any number of posts where Krugman focuses on the case for stimulus and largely or entirely ignores the dangers of deficits. Any of these would have supported Scarborough and Sachs' thesis. Instead, though, the authors pick possibly the strongest anti-deficit argument Krugman has made in the past five years.

I can understand Scarborough. He is, and I don't mean this as a pejorative, a TV personality. That's a rare and valuable talent and Scarborough is very good at it. It is not, however, a profession that depends upon reputation in the conventional sense. As long as a TV personality does nothing to betray his public persona, almost all press is good press.

For Sachs, though, reputation is extraordinarily important. This is an important and influential scholar, someone whose ideas carry great weight with policy makers. Here's a representative passage from Wikipedia:
Sachs is the Quetelet Professor of Sustainable Development at Columbia's School of International and Public Affairs and a Professor of Health Policy and Management at Columbia's School of Public Health. He is Special Adviser to United Nations Secretary-General Ban Ki-Moon on the Millennium Development Goals, having held the same position under former UN Secretary-General Kofi Annan. He is co-founder and Chief Strategist of Millennium Promise Alliance, a nonprofit organization dedicated to ending extreme poverty and hunger. From 2002 to 2006, he was Director of the United Nations Millennium Project's work on the Millennium Development Goals, eight internationally sanctioned objectives to reduce extreme poverty, hunger, and disease by the year 2015. Since 2010 he has also served as a Commissioner for the Broadband Commission for Digital Development, which leverages broadband technologies as a key enabler for social and economic development.
Silly, avoidable errors undercut Sachs' ability to continue this good work.

Which leads back to my original question. Did Jeffrey Sachs actually agree upon a link that contradicted the point he was trying to make or are links, like headlines and blurbs, often added after a piece is submitted?

Thursday, March 7, 2013

More on Marissa Mayer

I think that this is a very good point:
It also seems like a feminist mistake to expect women entrepreneurs to create little utopias instead of running extremely successful businesses. Mayer was attacked recently for her decision not to allow employees to work at home. She is a woman, this line of thinking goes, how could she think women should have to work away outside of their houses, away from their children? But why should Marissa Mayer have some special responsibility to nurture her employees with a cozy, consummately flexible work environment just because she is a woman? Isn’t her responsibility to run a company according to her individual vision? If we want powerful female entrepreneurs shouldn’t we allow them to pursue entrepreneurial power?
 
I am not actually 100% sure that the decision to end "work at home" really hurt women at Yahoo! (as a class; clearly, individual workers of both genders could have had their work lives disrupted), given that men are more likely to work at home than women.  Mayer's previous company (Google) tries to limit the number of telecommuters, and it is hardly unreasonable that a new CEO would want to draw on successful business models she has personal experience with.

Now could this policy change have been done more artfully? Sure.  But I am amazed by the duration of this discussion in the media and how much insight it is bringing into the whole work at home phenomenon. 

Admittedly, it is a competitive field

Thomas Lumley is an early contender for identifying the worst chart of 2013.  This special breed of awful is accomplished by creating a chart that actually takes more effort to process than text describing the differences would.  Since the point of charts is to convey information efficiently, there really is no good reason for this chart to exist. 

Of course, as a longtime SAS programmer I am biased against graphical displays of data in general (you would be too if you had to use gplot and gchart).  But I think this example will be disliked by the R and Stata crowds too.

Wednesday, March 6, 2013

Forwarded almost without comment

This story from Reuters is outside of my area of expertise so I'm just going to make this blanket recommendation. This is a solid piece of reporting on the not easy-to-cover fields of epidemiology, biostatistics and the economics of health care.

Special Report: Behind a cancer-treatment firm's rosy survival claims

Edit (Joseph): Andrew Gelman correctly points out that the authors are Sharon Begley and Robin Respaut.  This report is useful to me as another reason that we need to have a control arm for randomized trials.  It isn't enough to know what the rate is for conventional care and contrast a novel therapy with it.  You need to also account for the selection effects among the population receiving the novel therapy.  Randomization is a very nice way to accomplish this outcome in a generally understood manner. 

Tuesday, March 5, 2013

Educational dilemmas

This is entirely correct:
You can hold us accountable for how much our graduates learn.  You can hold us accountable for how many students graduate. You can even hold us accountable for both of those at the same time.  And, amazingly enough, you can hold us accountable for doing this while educating a broad spectrum of public high school grads. What you cannot do is hold us accountable for all of those things AND the cost/time required for them to graduate.  Getting lots of people through in a short time frame, and teaching them a lot along the way, requires a lot of attention and a lot of support (whether financial aid so they can focus on school rather than work, or tutoring and small classes and all that, or even extracurriculars to help them develop certain “soft skills”), and that costs money.  So pick any two: Quality, quantity, and cost (which is directly related to time).  If you say that students are learning less and less, believe me, you’re right.  Just don’t tell me that you want me to fix that AND graduate more students without some major changes to How Things Are Done.

I think the same principle applies to high school education.  Due to the modern phobia of taxes, people do not want to pay more for education.  Yet there is constant pressure for students to learn more and for education to be more inclusive and accessible.  I am all for finding ways to be more efficient and evidence-based in educational spending.  But it doesn't help if the initial conditions are impossible to meet.

Monday, March 4, 2013

Things that tempt you to write posts you don't have time for

Ludwig von Mises:
The age in which the radical anticapitalistic movement acquired seemingly irresistible power brought about a new literary genre, the detective story. The same generation of Englishmen whose votes swept the Labour Party into office were enraptured by such authors as Edgar Wallace. One of the outstanding British socialist authors, G. D. H. Cole, is no less remarkable as an author of detective stories.

The Passing Tramp admirably handles the rebuttal, complete with a reference to that sterling Tory, Lord Wimsey.

Sunday, March 3, 2013

Another solution

Paul Krugman has a good point:
Still, isn’t it bizarre that governors who protest bitterly about the cost of Obamacare, and in general about wasting taxpayers’ money, are willing to throw away lots of money via corporate welfare? Actually, no; it’s only puzzling if you think they believe anything they say.
The context is the decision to allow Arkansas to expand the Affordable Care Act exchanges instead of Medicaid.  Aaron Carroll:
Many claimed that the ACA cost too much. They said it would raise the deficit. They opposed the expansion not only because it raised the federal price tag, but also because it was “fiscally unsustainable” for states in the long run. I took them at their word.

I’m now surprised that they prefer a solution that costs more.
Maybe what we really need to do is randomize?  After all, the secondary benefits of high health care spending are decidedly unclear.  The United States isn't exceptional relative to Canada or France in terms of medical outcomes.  The benefits to medical innovation could be pursued via a stronger NIH with a broader mandate.  So the only real question is whether private insurance can produce innovations that reduce overall costs and/or improve outcomes.

Surely randomization of states could provide some really useful information and settle this question more directly?  After all, don't we consider it the gold standard for causal inference?  And it would be easy to randomize states to several possible versions of the ACA (no expansion, exchanges, Medicaid expansion, a public option allowing the uninsured to purchase Medicaid). 
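Mechanically, the random assignment itself is trivial; the politics, not the statistics, would be the hard part. A sketch in Python, with made-up state abbreviations and ignoring any stratification by region or population:

```python
import random

# Hypothetical policy arms for illustration.
ARMS = ["no expansion", "exchanges", "Medicaid expansion", "public option"]

def randomize_states(states, arms=ARMS, seed=2013):
    """Shuffle the states, then deal them round-robin into the arms."""
    rng = random.Random(seed)
    shuffled = states[:]
    rng.shuffle(shuffled)
    return {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

assignment = randomize_states(["AR", "CA", "TX", "NY", "OH", "FL", "WA", "GA"])
```

In practice you would want to block on obvious confounders (population, baseline uninsurance rates), but the basic mechanics really are this simple.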

Is there a good reason not to do this?

P.S. Here is a good example of randomization giving us information in an area that is equally difficult for inference.

Saturday, March 2, 2013

Extremes in student feedback

After I'd been teaching at a school in Watts for a while, I learned that my predecessor had once been overpowered and left tied to his chair by an angry class. This struck me as notable because

A. Despite its location, this was not a rough school

B. My predecessor had been tied to a chair.

When I pressed other faculty members for more details, they explained (rather nonchalantly for my taste), "he was a really bad teacher."

Obviously, most jobs are easier and more pleasant if you're good at them, but this is particularly true in education. Teachers face constant, immediate and often intense feedback from students, something that is greatly intensified when you go to disadvantaged schools in the inner-city or poor rural areas like the Mississippi Delta (where I also taught).

Students get angry at bad instruction and they take advantage of bad classroom management. When you add the amplification that comes with the complex social dynamics of kids and adolescents, teaching can be a truly miserable job if you can't get the hang of doing it right.

This is a large part of the reason why so many new teachers leave the profession. Even after having invested years of study and tens of thousands of dollars, they walk away with a degree that's good for little else because, for them, the job actually is that terrible. By contrast, for those who are good at it, who can explain ideas clearly and establish a rapport with kids and keep a class focused and on task, teaching can be a most enjoyable and satisfying job.

You don't have to be a statistician to see the potential selection effect here. It should certainly be addressed when discussing the impact of bad teachers or proposing incentive pay/dismissal plans for improving education.

It should be addressed but it usually isn't.

Friday, March 1, 2013

Like judging an archery contest after they took down the target...

This is a really small part of a bigger story (and probably the subject of more posts), but having been on both sides of video evaluations, I had a personal problem with this statement from Thomas Kane (which comes to us via Andrew Gelman):
While the mean score was higher on the days that the teachers chose to submit, once you corrected for measurement error, a teacher’s score on their chosen videos and on their unchosen videos were correlated at 1.
Just to be clear, I don't have any problem with this kind of evaluation and I really like Kane's point about using 360s for teachers, but the claim of perfect correlation has raised a red flag for almost every statistically literate person who saw it. You can see an excellent discussion of this at Gelman's site, both in the original post and in the comments. All the points made there are valid but based on my experience I have one more stick for the fire.

For the sake of argument, let's assume that the extraordinary idea that rank is preserved, that the nth teacher on his or her best day is still worse than the (n+1)th teacher on his or her worst day, is true. For anything more than a trivially small n that would suggest an amazing lack of variability in the quality of lessons from teachers across the spectrum (particularly strange since we would expect weaker and less experienced teachers to be more variable).

But there's a source of noise no one's mentioned and in this case it's actually a good thing.

Except for special cases, teachers walk through the door with a great deal of information about their classes; they've graded tests and homework papers; they've seen the reaction to previous lessons, they've talked with students one-on-one. You would expect (and hope) that these teachers would use that information to adjust their approach on a day to day basis.

The trouble is that if you're evaluating teachers based on an observation (particularly a video observation), you don't have any of that information. You can't say how appropriate a given pace or level of explanation is for that class that day. You can only rely on general guidelines.

Which is not to say that good evaluators can't form a valuable assessment based on a video of a lesson. I'm a big believer in these tools, both for staff development and (within reason) evaluation, but it's an inexact and often subjective process. You can get a good picture and diagnose big problems, but you will never get the resolution that Kane claimed.

There are other problems with this interview, but the correlation of one should have been an easy catch for the reporter. You should never let an interview subject go unchallenged when claiming perfect results.
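For readers wondering how "correcting for measurement error" can yield a correlation of exactly 1: the standard correction for attenuation divides the observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch (the reliability numbers here are made up for illustration) shows how a modest observed correlation plus low estimated reliabilities "corrects" all the way to 1; with noisier estimates it can even exceed 1, usually a sign the correction is being pushed past what the data support:

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Classic correction for attenuation:
    r_true = r_observed / sqrt(rel_x * rel_y).
    """
    return r_observed / math.sqrt(reliability_x * reliability_y)

# A modest observed correlation combined with low reliabilities
# "corrects" to exactly 1.0 -- the kind of result that should
# raise a red flag for any statistically literate reader.
r = disattenuate(0.6, 0.6, 0.6)
print(round(r, 2))  # -> 1.0
```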

Wednesday, February 27, 2013

More on Yahoo! and flex time

A lot of the framing of the Yahoo! decision to ban work at home has been painting it as anti-family or anti-feminism.  Dana Goldstein has a very good refutation of this idea.  A couple of good points:

No one is forcing me to take sole responsibility for these tasks. If I don't do them when I'm "working from home," they will still get done. My boyfriend and I will split them up, or do them together. But here's the thing: It's really hard for me to be at home and ignore my domestic to-do list. I have a voice in my head telling me that until my apartment is neat, clean, and stocked with fresh food, it's perfectly okay to procrastinate on my real jobs, the ones for which I get paid: reporting, writing, and editing. After nearly three years of freelancing, I've learned that I shouldn't work from home more than one or two days per week. I now commute from Brooklyn into "the city" almost every morning, to work at the New York Public Library on 5th Avenue and 42nd Street. Yes: I voluntarily spend my days in midtown Manhattan, eat lunch at the ubiquitous Hale & Hearty Soups, and dodge tourists in the subway.


So here's my tentative conclusion. Flex-time is a feminist issue. Working from home full time? Maybe not so much. And here are some very definite feminist issues: Access to high-quality, affordable childcare. Paid sick leave, maternity leave, and paternity leave. Male partners who pull their weight at home.


I also think that there is an issue of privilege involved in these discussions.  Nobody has any trouble with the idea that an employee of McDonald's is unlikely to be able to work effectively from home.  In fact, there is evidence that employers in these sectors are trying to schedule time very flexibly in order to improve efficiency (at the cost of being able to offer a predictable schedule to employees). 

It is true that the best way to manage knowledge-based workers is via performance-based metrics.  But these can be very difficult to implement if the office also has a very flexible reporting structure.  It is true that counting cars in the parking lot is a bad plan at the individual level.  But that is an issue of flex time and not the ability to work from home.  And for managers who still have room to grow, remote workers are harder to manage than local ones (I have done both and the former requires a lot more finesse). 

So I think that Dana is on to something pretty important here. 

Monday, February 25, 2013

Yahoo! is getting bad press

But maybe it shouldn't be.  There has been a lot of discussion about the end of the work-from-home program at Yahoo!, but these two points pretty much convinced me that there was a business case for the decision:

"A lot of people hid. There were all these employees [working remotely] and nobody knew they were still at Yahoo."

Mayer is happy to give Yahoo employees standard Silicon Valley benefits like free food and free smartphones. But our source says the kinds of work-from-home arrangements popular at Yahoo were not common to other Valley companies like Google or Facebook. "This is a collaborative business."


The first sounds like a very tough HR problem to solve without bringing everyone in so that they can be integrated into the business.  The second is an excellent business case for doing so as a way of enhancing productivity.  The part that is always hard about work-from-home arrangements is that people are very defensive about them.  And they should be -- it is a very nice benefit.  But it also makes it a lot harder to collaborate with others and to manage direct reports. 

The framing of it as a blow to working parents seems to be a bit odd to me.  I would be much more concerned about parents who are working in low wage employment situations without the resources to access things like daycare. 

Health care costs: a never-ending saga

There is a fascinating piece on health care costs in America that has been getting a lot of attention. There have been a lot of responses but I want to discuss one of the unhelpful ones.   In it, the argument is:

Writ large, this is how health care billing works.  Insurers won't let them charge for one thing--nursing care, say--so instead they bill you for the Tylenol at $20 per.  Or they bill someone else, someone who pays more.  Or they double what they charge uninsured patients in the expectation that the bill will, for most people, eventually be written down by some large percentage.


The web of cross-subsidies, underpayments, overpayments, and upcoding is opaque to everyone except the administrators.  And they are not, as Brill observes, eager to make any of it clearer.  In part because they genuinely feel, as does the middle manager in our story, that they are forced into these little subterfuges to recover legitimate costs that short-sighted bean-counters are refusing to cover.
I think this is a completely unhelpful perspective.  It basically argues that the issue in health care costs is over-regulation, ironically by other private sector actors.  However, it ignores the general issues of lack of bargaining power in an emergency room and opaque information.  In other industries, we do not generally consider a ton of hidden costs and a high-pressure sales environment to be ideal.

This is even more clear when the alternative is rate setting by some sort of regulatory agency.  This is the normal approach in high income countries and it seems to lack a massive downside.  Sure, it reduces spending on medicine, which can reduce the incentive for innovation in some cases.  But it can also increase the incentives in others -- when you can't just shift the costs of poor process control to customers, you might improve the process. 

So price transparency would seem to be the minimum level of reform . . . 

Sunday, February 24, 2013

Value added testing

This article is worth reading throughout.  However, the most relevant passage is:

Because student performance on the state ELA and math tests is used to calculate scores on the Teacher Data Reports, the tests are high-stakes for teachers; and because New York City uses a similar statistical strategy to rank schools, they are high-stakes for schools as well. But the tests are not high-stakes for the eighth-graders at Anderson. By the time they take the eighth-grade tests in the spring of the year, they already know which high school they will be attending, and their scores on the test have no consequences.
Seriously, can any student of incentives fail to see how this could go terribly, terribly wrong?  The students could, for example, decide to blow the test because the teacher was overly rigorous. 

This is why I am less skeptical about metrics like the SAT.  It is still flawed, as it is tricky to derive unbiased estimates from any observational data.  But in this case, the students, the teacher and the school all have something at stake in their performance on the exam. 

They're just not paying Gelman's preferred currency -- the odd economics of access

This post from Andrew Gelman on a truly bad graph and this story from NPR had a common element that caught my eye, partly because I've been thinking about the role connections play in careers and economic mobility.

Here's the relevant passage from Gelman:
All ugliness aside, this reminds me of a story that I haven’t had a chance to share . . . until now. The bit on the above infographic about “C-level execs” reminds me of something that happened a couple years ago, when I was invited to speak at an event called “The 2012 Election: Predicted Outcomes and Implications for Your Business,” for “an association of C-suite executives.” I responded that I was interested and told them my fee, to which they responded:

"I am glad to hear that you are interested in this opportunity. However, we don’t compensate speakers as we find that most are interested in the opportunity to be in front of a room full of high quality C-suite executives."

Is that for real? I speak to all sorts of people for free (for example, I just spoke last month at a dataviz meetup), but I was surprised to hear that an association of business executives weren’t planning to pay. So I declined. I say this not to imply that I’m some sort of anti-corporate crusader (after all, I would’ve been happy to talk for pay!) but to express my bafflement at the whole “C-level executive” thing. What’s going on with that?
And here's the opening of the NPR story:
CNBC is far and away the television ratings leader in the financial cable news business. Now, evidence arrives that its executives, producers and reporters are going to great lengths to maintain its status.

The channel has adopted a policy that prohibits guests from appearing on rival channels amid breaking news if they want to be seen by CNBC's larger audience.
In both cases, it seems that the party providing the service is, in a sense, paying the party receiving it. The Chief ____ Officers would receive actionable business intelligence; CNBC would receive valuable on-air talent. You would expect the recipients to try to compensate for these services either directly or indirectly. Instead, the company in the first anecdote insists on getting for free what would normally be a well-compensated service, while CNBC actually puts additional demands and restrictions on the people providing the free service.

Of course, that's not what's going on, at least not most of the time. While Gelman really was a researcher who was offering to present findings of potential value to a company, most similar presentations are actually sales pitches. I'm going to be a bit vague here because I'm currently working in the corporate world, but without naming names, tremendous amounts of capital are drained from American businesses every year by researchers with a slick HBR paper, an inspirational story and a new set of buzzwords.

If you can get a C-level executive interested in your product you can charge obscene amounts of money for years. Better yet, the metrics for success are ill-defined to nonexistent. The only real barrier to becoming the next Tom Peters is access to someone at or near the top of a company. It's easy to understand why companies think they can get research presented for free.*

Something similar is going on in the CNBC example. The majority of the people interviewed on CNBC are either selling something -- a book, some stocks, themselves -- or are building and maintaining a brand. Doing these things requires getting access to large numbers of people. You could manage this by buying advertising (most effectively on a targeted platform like CNBC), but ads are expensive and, even before TiVo, they were often avoided through channel surfing or, before the remote, by simply leaving the room for a snack or a bathroom break.

Getting your access as part of regular programming and having it labeled as news is a far better way of doing things. You can see why the interviewees would put up with the restrictions.

This does, of course, raise questions about conflict of interest and journalistic integrity, but that's a topic for another post.



* This still leaves unexplained why executives would want to, as Gelman points out in this comment, waste their time with inferior speakers to avoid paying a relatively trivial amount of money. Thoughts on that will have to wait for an upcoming post.

UPDATE: This comment from Felix Salmon throws some amusing light on an event similar to the one in Gelman's anecdote.


Friday, February 22, 2013

Education Research

A strong opinion:

I don’t have an informed opinion on the standard of observational research in education, but if the standard is high, there can’t still be lots of low-hanging fruit in the form of cost-neutral interventions whose benefit is obvious without comparative evaluation.
I think that this position is on the strong side, given the long lags between exposure (i.e. education) and outcomes (i.e. adult competencies).  But it does point out that cheap interventions whose benefits are easy to measure are unlikely to still be lying around if there has been any sort of careful research.  So perhaps the randomization idea isn't all bad?

Thursday, February 21, 2013

"More colleges stop giving credit for AP exams"

I have to admit mixed feelings about this (from Marketplace):
Yet despite all the hard work, students like Brown may not be able to place out of required college courses or even skip freshman year if they score well on the AP tests. Some prestigious colleges have stopped giving academic credit for AP tests scores.

Brown doesn’t. Columbia doesn’t, and most recently, Dartmouth said it won’t let AP students skip ahead.

“We want a Dartmouth education to take place at Dartmouth,” says school spokesman Justin Anderson.
...

Conley says prestigious schools can afford to be picky about what credits to accept. But there are “more general admissions schools where they want students to bring AP credits and they do want to reward them for doing that.” In other words, AP credit is like bait for the best students.
For various reasons, I've always preferred CLEP to AP as a method of testing out, and as for advanced classes, I tend to favor plans that actually allow students to attend college classes while in high school. But of the choices we normally see, AP is certainly better than many, and I very much support the idea of testing out of lower-level courses.

I'm sure there are some exceptionally good freshman-level courses at Brown, Columbia and company, but that section of Cal I would have to be pretty impressive to justify the expense, the time and the opportunity costs (I'm sure there's an even better course the student could use to fill that time).

Wednesday, February 20, 2013

"Cosma Shalizi - Why Economics Needs Data Mining"

As part of a comment thread to this post on big data, Mark Thoma sends us here:



UPDATE:

I should have mentioned this before, but, while the entire segment is worthwhile, I was particularly glad to hear the interviewer address the strange discrepancy between the way economists use "data mining" and the way statisticians, computer scientists and most people in business use the term.



Tuesday, February 19, 2013

Interesting stats blog

From New Zealand. 

Education blogging of note

David Warsh has posted another entry in a thoughtful series about early childhood development. He also includes useful links, including one to this interesting interview with James Heckman.

Monday, February 18, 2013

Yeah, I'm getting tired of these too

Once again I made the mistake of listening to On the Media. It started out reasonably benign with a standard but harmless piece on cyber-security, then went downhill with a smug piece on fact-checking before going off a cliff with this ddulite puff piece. Here's the blurb:
Late last month, The Washington Post debuted "The Truthteller," an application that it hopes will soon be able to fact-check politicians' speeches in real time using speech-to-text technology and a vast database of facts. Brooke talks to Cory Haik, The Washington Post's executive producer for digital news, about the app.
Of course, the speech-to-text and database problems are trivial next to the issues with processing natural language. To work at anywhere near the level discussed by Haik, the system would have to be considerably more advanced than IBM's Watson. Watson was designed to address short, free-standing questions following similar linguistic conventions and having clear, unambiguous answers.

This isn't meant to denigrate the team that developed Watson. Just the opposite. Interpreting natural language is extraordinarily difficult and solving even highly constrained problems is an impressive and important accomplishment. IBM has a lot to brag about.

The Washington Post currently has a beta up that apparently can sometimes spot strings that look like simple factual statements that lend themselves to automated comparison to a database. There's no reason to believe the app will ever move much beyond that, but the interviewer did believe...

without a sign of suspicion...

immediately after a segment boasting about how carefully On the Media checks its facts.

To quote Snoopy, "the mind reels with sarcastic replies."

Sunday, February 17, 2013

When does original programming make business sense? part 1 -- cable

The big PR push (and credulous coverage) of Netflix's House of Cards has got me thinking about the necessary conditions for successfully launching an original scripted series. Before we get to the business case for Netflix, here are some thoughts on the current cable landscape.

When it comes to producing original content, the big players tend to fall into three groups (with one notable exception):

HBO
Showtime
USA
FX
TNT
Nick
Cartoon
AMC

Let's start slicing. Nick and Cartoon are certainly successful (SpongeBob is, by some standards, the most popular cable show ever) but animation and children's programming play by a very different set of rules. Production costs are extremely low and high audience turnover means that you can keep running the same few episodes for years. Furthermore, since children's programming works best in dedicated blocks, it's difficult for general interest channels to fully capitalize on this hugely profitable market. (Might throw Disney in here as well.)

Then there are the pay channels. Once again a very special business model. From a revenue standpoint, someone who watches five hours a month of Showtime counts the same as someone who watches fifty. What matters is loyalty. If you can bring in people with movies and specials, then get them sufficiently caught up in, say, Dexter, you don't have to worry about Starz stealing them away with better movies and specials. Better yet, if you can get fans of an older show like Dexter also hooked on a new show like Homeland, you can keep those monthly checks coming in for a long time.

This model does not work if your revenue comes from advertising. Having one show that a viewer never misses isn't nearly as valuable as having a bunch of shows that a viewer watches occasionally. This makes it extremely difficult for an expensive original cable show to pay for itself (particularly with a thirteen episode season). In this environment, viable shows have to add synergistic value. This brings us to the USA, FX and TNT, big flagship channels of ginormous media companies. These channels have the deep pockets, extensive synergies and long time frames that make it easier to make the business case for a Burn Notice or a Justified.

Which leaves AMC as the exception, the channel that would seem to suggest that anybody with good enough shows can make a go of original programming. But before an executive at A&E points at AMC to justify greenlighting the next original drama, there are a couple of cautionary notes to keep in mind:

1. It's dangerous using outliers as examples

AMC got extraordinarily lucky twice, first in finding executives who could spot good shows and second in having good shows cross their desks. This is not something you can plan on in advance. In fact, it's something most companies fail at when they try.

2. (and this is the big one) It's not entirely clear that AMC is actually making money on this

It would take a good financial analyst and a lot of access to answer this definitively, but it certainly looks like the network loses serious money on Mad Men and has been forced to make big and risky budget cuts to the show that is making them money, the ratings hit The Walking Dead. Keep in mind, AMC has been both good and lucky. The odds against duplicating its success are steep, and that success may not be all that, you know, successful.

These days, every cable channel seems to have an original scripted series in the works. From the executives' standpoint, this is easy to understand; these shows are the best way for a network to get prestige and media attention and who wouldn't want to be the VP who greenlit the next Breaking Bad? From the standpoint of the investor, though, if the channel isn't a TNT or an HBO or a Cartoon Channel, you might prefer another strategy.

Saturday, February 16, 2013

Weekend cinema buff alert

Hulu has temporarily lifted the paywall around the Criterion Collection. If you're a movie lover, you should definitely drop by.

p.s. but be prepared for LOTS of commercials.

The best tech writing you'll read this week just might be an article about wheelbarrows



I may not always show it but I'm a great fan of technology (it's tough love, but love nonetheless). What I am not a fan of is the way that we discuss technology. Many (maybe most) journalists on this beat are so besotted with the gee-whiz aspects that they have little time to think seriously about why certain technologies are successful, what demands they make on infrastructure, and how they answer existing needs and create new ones. (Search on 'ddulites' for more on this.)

Having journalists report on the subject of their infatuation is generally a bad idea, but in this case the damage goes beyond the inevitable annoying prose. Technology is important and when we can't discuss it intelligently we make bad decisions that end up holding progress back.

Look at Groupon. One of the most exciting and promising areas of research today is the study of social networks, but partly because business reporters did such a poor job covering the subject, this crappy gift-card company was able to convince investors that its creaky business model was 'social.' (And no, requiring minimum orders for a deal to go through wouldn't have qualified even if they hadn't set the threshold meaninglessly low.) The money that went to Groupon was diverted from other investments, including businesses that actually used the properties and analyses of social networks in their business plans.

Perhaps it's not surprising that the best thought-out, most clear-eyed popular writing on technology I've seen recently would focus on the least glamorous type of technology imaginable (via DeLong).







Wednesday, February 13, 2013

More fun with charts

Daniel Kuehn and Joseph previously discussed this post by Megan McArdle entitled "Department of Awful Statistics: Income Inequality Edition." They both make good points, but I'd like to approach this from the angle of appropriate visualization. McArdle supports her thesis that the middle class is neither disappearing nor getting poorer with charts derived from census table H-17 which you can and really should download here (the best way to keep us all honest is to play along at home).

The trouble is they're bad graphs.





To the extent that statistics includes data visualization, this is definitely bad statistics. When trying to depict trends and relationships, you generally want to get as much of the pertinent information as possible into the same graph. You don't want to force the reader to jump around the page trying to estimate slopes and compare magnitudes, nor do you want to take a few snapshots when you can easily picture all the data.

There are lots of acceptable ways of laying out the data in table H-17, but I'm just going to go with the simplest (partly because I like simple and partly because I'm doing this in OpenOffice). As with McArdle's graphs, the numbers are inflation-adjusted.
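For anyone playing along at home, here's a minimal sketch of what "inflation-adjusted" means in practice. The function and the CPI numbers below are hypothetical, purely for illustration; they are not the actual deflators the Census Bureau uses for table H-17.

```python
# Sketch of constant-dollar conversion (illustrative CPI values only).

def to_constant_dollars(nominal, cpi_then, cpi_base):
    """Convert a nominal dollar amount to base-year dollars
    by rescaling with the ratio of price indexes."""
    return nominal * cpi_base / cpi_then

# e.g. $10,000 earned when the index stood at 50, expressed in
# dollars of a base year when the index stood at 200:
real = to_constant_dollars(10_000, cpi_then=50, cpi_base=200)
print(real)  # 40000.0
```

Every income figure in a multi-decade series gets rescaled this way to a common base year, which is what makes the bracket lines comparable across time at all.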



I'm not that comfortable with this data (for reasons I'll get to in a minute), but it does look fairly consistent with the hollowing out of the middle class, with the 35K-75K brackets (the top two lines) dropping more or less steadily for decades. Also check out the more than fourfold increase in the share of people making more than 150K.

The two main things that make me uncomfortable with the data are the start point (which falls close to at least a couple of inflection points) and, on a related note, the failure to account for the baby boom, which was at the bottom of its earning power forty years ago and should be close to its maximum now.

As far as I can tell income distribution is not broken down by age in these tables (though I suspect the data are available on request). We can, however, answer the related question of what median income looks like when we control for age and extend the series over a longer interval. (Download table P8 from here)




You can see why I was nervous about starting in 1967.

The question of income inequality and what's happening to the middle class is a complicated one and is probably best addressed by people who know what they're talking about, but if you are going to try to argue one side of the case graphically, you should at least take the time to use appropriate graphs.

p.s. I picked 35-44 because it seemed like a good representative mid-career interval and because, since I wasn't comparing different age groups, an uncluttered one-line graph seemed sufficient. If you prefer, here's the multi-range version (though I don't know if it adds much information).







Tuesday, February 12, 2013

Thinking about failure and collective amnesia.

No. Not in the sad my-life-adds-up-to-nothing way, but more in the case study sense. I've been noticing how often optimistic analyses of proposed products and business models echo the same arguments used over the years for various underperforming enterprises and catastrophic failures, invariably without a flicker of recognition.

No doubt, this is partly due to a general lack of long-term memory in the pundit class, but the problem seems particularly acute when it comes to failure. There are exceptions like this well-thought-out analogy by Josh Marshall or this piece of historical context for Zucker's Leno debacle from Kliph Nesteroff, but as a rule, most journalists don't pay nearly enough attention to these counterexamples (which makes it all the more difficult to avoid repeating mistakes).

I'll try to add some more entries and drill down into some of the specifics, but in the meantime, here's a short list of potentially useful examples: ideas that seemed (and in some cases, actually were) good at the time.

The aforementioned attempt to make Jerry Lewis king of the talk shows.

A late Eighties format that doubled the resolution of video tapes while being completely compatible with standard VHS.

An attempt to break the Seventies DC/Marvel duopoly.

An attempt to break the Coke/Pepsi duopoly (perhaps breaking duopolies deserves a subcategory).

Adios, Amiga.

Friday, February 8, 2013

Fun with charts

This post by Daniel Kuehn is worth reading, although all of the action is in the comments. 

I think he is right about the denominator problem in interpreting her graphs. It's also a very good example of how a point can be correct and yet not explain all of the differences (her comments about rounding). However, the labeled bulge seems to be a lesser sin than variable bracket sizes on a density plot.

As for the change argument, it is fine to use a chart to explain something and then talk about the expected changes to the distribution. Where I am less happy is that there are changes going on in the United States all of the time (aging of the population, the propensity to form new households) that are going to influence the shape of this curve. It is possible to imagine the curve shifting exactly as Jon Evans suggested, for reasons that have nothing to do with inequality.

But standardized curves have their own issues . . .  So even Megan's use of the curve to show the shifts over time doesn't address the null conditional on the changes in the underlying population. This may even help her argument, I am not sure, but I would certainly rather see density plots graphed in equal-sized segments, just for reader clarity.
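To make the equal-sized-segments point concrete: when brackets have unequal widths, the honest fix is to plot density (share per $1K of bracket width) rather than raw share, so a wide bracket doesn't masquerade as a bulge. A quick sketch, with made-up shares rather than the actual census figures:

```python
# Sketch: why unequal income brackets distort a density plot.
# The percentage shares here are invented for illustration.

def bracket_density(share_pct, lo_k, hi_k):
    """Percent of households per $1K of bracket width."""
    return share_pct / (hi_k - lo_k)

# Two brackets with the same total share but very different widths:
narrow = bracket_density(10.0, 35, 50)    # 10% spread over $15K
wide = bracket_density(10.0, 100, 150)    # 10% spread over $50K

print(round(narrow, 3))  # 0.667 percent per $1K
print(round(wide, 3))    # 0.2 percent per $1K
# Plotting raw shares draws both brackets at the same height,
# making the wide bracket look more than three times denser
# than it actually is.
```

Equal-sized segments sidestep the problem entirely, since every bar then has the same denominator.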

Still, a worthwhile argument to follow and it is useful insofar as it improves understanding of what the plots do and do not mean. 

Wednesday, February 6, 2013

Annals of bad analogies


From here:
Think about it this way. Say your elderly mother had to be hospitalized for life-threatening cancer. The best doctor in the region is at Sacred Heart, a Catholic, private hospital. Could you ever imagine saying this? “Well, I don’t think our taxpayer dollars should subsidize this private institution that has religious roots, so we’re going to take her to County General, where she’ll get inferior care. ’Cause that’s just the right thing to do!”
No. You’d want to make sure that your tax dollars got your mom the best care. Period. Our approach should be no different for our children. Their lives are at stake when we’re talking about the quality of education they are receiving. The quality of care standard should certainly be no lower.
An analogy is the weakest form of argument, because it presumes similarities between cases.  In this case we are equating a one-time event (cancer treatment) with a long-term process (educating people).  There is also a difference in that cancer outcomes are much easier to measure (due to the short time between diagnosis and resolution) than educational outcomes, so "better" is much easier to evaluate.  Finally, it ignores magnitudes.  What is "better," and by how much?  Is it a matter of preference (Starbucks coffee is better than McDonald's coffee) or an objective metric? 


But this whole thing dodges the main question -- why is County General not competitive with Sacred Heart?  Is that not the more interesting question?  Is it because County General can't turn patients away and so gets the sickest of the sick? 

These points matter.   

Monday, February 4, 2013

Imbalance in the infrastructure debate

Joseph's previous post builds on this thread from Mark Thoma. Each is worth reading but I think both understate the extraordinary asymmetry between the pro and con in the should-we-spend-on-infrastructure debate (distinct from the where-to-spend debate). Consider the following statements:

1. We need to repair and upgrade the country's infrastructure in the relatively near future (let's say a decade)

So far as I can tell, almost no one is willing to stand up and argue against this point, which is strange because, though I don't happen to agree with them, there are reasonable arguments to be made here and, once this point has been conceded, the remaining ground is extraordinarily difficult to defend.

2. The economy is not operating at full capacity

We've already stipulated that we need to build these things, which means we've also agreed to tolerate at least some crowding out at some point in the future. You simply can't have one without the other; you can only seek to minimize the effect. A crowding-out argument for delaying pretty much has to assume that there will be more slack in the economy far enough in the future to make waiting worth it (but not so far as to extend past our decade window). I've heard lots of people making crowding-out arguments but none making the necessary corollary. (Even if you believe that crowding out is unaffected by economic conditions, you still don't have any argument for waiting.)

3. Borrowing costs for the federal government are historically low.

As a general rule, repairs don't get cheaper the longer you put them off. This tends to put the burden of proof on those arguing for a delay. If we were living in a period of historically high borrowing costs, you could argue that rates were likely to head back down if we waited. There are reasonable cost-based arguments against infrastructure spending, but only in the spend/don't spend context, not spend now/spend later.

The infrastructure debate is another example of how the public discourse has entered a phase reminiscent of Carroll's Tortoise/Achilles tale, where showing the premise is true and showing the premise leads to a conclusion is not sufficient to make the other side accept that conclusion. Of course, it's not an exact analogy. Carroll was making a point about the limits of logical systems. What we're seeing here is more probably a demonstration of people's willingness to ignore the rules of argument when those rules lead to an uncomfortable policy position.

Friday, February 1, 2013

Sometimes you do it because it is a good idea anyway

Mark Thoma looks back five years to reports of skepticism about infrastructure development.  Part of this is that maybe we just need to relax our rules on project timelines a wee bit.  But another piece of it is that the worst mistake we can make is to end up competing with the private sector for labor to make cool things that we will end up needing.  As disasters go, this one is rather mild. 

Thursday, January 31, 2013

Shamisen heroes and free TV





I had been working on another piece about over the air television when I happened to surf across this video on one of the many Asian-themed channels you can get with an antenna in LA and it struck me as an appropriate accompaniment for a quick note about over the air TV.

With a pair of rabbit ears I pick up programming in at least a half dozen languages. That's an indication of the diversity of the medium and its importance to some underserved segments of the population, but it also represents a real competitive weakness. Terrestrial television suffers from a crippling lack of attention. Journalists who cover both media and personal finance are almost completely oblivious to this innovative, totally free source of programming.

If you're a medium trying to get the attention of the mainstream media, having a large part of your viewership consist of recent immigrants is not going to help.

I'll leave you with something a bit more traditional from the Yoshida brothers, though still, well... Hell, just watch it.