Sunday, March 24, 2013
I was starting to write a post about an old but nicely done 70s movie of the week (Ed Asner, Cloris Leachman and Lloyd Bridges all doing good work but -- God help me -- it's Robert Reed who knocks it out of the park in his big scene). I was specifically interested in the odd way the writer handled the big reveal, having characters start to speculate about it in the middle of the film.
I started to contrast it with a Henry Fonda film that ends with a masterfully executed blindside twist, but I realized that simply by saying ____ has a great surprise ending I could spoil the film. You might not guess the ending, but knowing something was coming could keep you from buying into the story. When the rug was pulled out, you'd already have one foot off of it.
Likewise, an Ira Levin novel I was also going to use as an example derives much of its effectiveness from the skillful way the writer slips something very big past you without putting you on your guard. If I tell you Levin pulls a fantastic narrative trick in ____, there's a much better chance that he won't pull it off.
The idea that simply telling someone that there's a spoiler can be a spoiler reminded me of a class of puzzles where, before you can solve the puzzle, you have to know whether you have enough information to solve the puzzle. These are discussed at length by Raymond Smullyan in What Is the Name of This Book? I'll try to dig up my copy and update the post with some examples.
Friday, March 22, 2013
75 years of progress
While pulling together some material for a MOOC thread, I came across these two passages that illustrate how old much of today's cutting-edge educational thinking really is.
First from a 1938 book on education*:
" Experts in given fields broadcast lessons for pupils within the many schoolrooms of the public school system, asking questions, suggesting readings, making assignments, and conducting test. This mechanize is education and leaves the local teacher only the tasks of preparing for the broadcast and keeping order in the classroom."And this recent entry from Thomas Friedman.
For relatively little money, the U.S. could rent space in an Egyptian village, install two dozen computers and high-speed satellite Internet access, hire a local teacher as a facilitator, and invite in any Egyptian who wanted to take online courses with the best professors in the world, subtitled in Arabic.
I know I've made this point before, but there are a lot of relevant precedents to the MOOCs, and we would have a more productive discussion (and be better protected against false starts and hucksters) if people like Friedman would take some time to study up on the history of the subject before writing their next column.
* If you have any interest in the MOOC debate, you really ought to read this Wikipedia article on Distance Learning.
Wow, just wow!
Via Thomas Lumley:
You may have heard about the famous Hawthorne experiment, where raising light levels in a factory improved output, as did lowering them, as did anything else experimental. The original data have been found and this turns out not to be the case.
The mind boggles at just how often I have used this example and how wrong it was. I have read the paper once, not that closely, but the overall impression I have is that Levitt is correct here.
Thursday, March 21, 2013
Cloud computing
Kevin Drum
What's different is that Google's products are all cloud-based. When Google Reader goes away on July 1, that's it. It's gone. If it were an ordinary bit of software that I'd installed on my PC, this wouldn't be a problem. It would keep on working for years even if it never got another update. I'd need to replace it eventually—because of an OS upgrade or a desire for new features that finally got too strong—but I'd probably have years to work that out.
I think that this element of the new model of software is worth a lot more attention than it is getting. Just look at the SimCity fiasco and ask what would happen if Microsoft made the same mistakes with a new cloud version of Office. Now, Microsoft is an extraordinarily well run company, so the chances of that are quite small.
But the general trend toward renting access is going to make interruptions of service (or just the internet misbehaving) a much bigger deal.
Wednesday, March 20, 2013
Apple, J.C. Penney* and fitness landscapes in everything
James Kwak has an excellent piece on Ron Johnson's unfortunate run as CEO of J.C. Penney.
According to today’s Wall Street Journal article, Johnson quickly eliminated coupons and most sales at J.C. Penney.
“Johnson bristled when a colleague suggested that he test his new no-discounts strategy at a few stores. . . . ‘We didn’t test at Apple,’ the executive recalled Mr. Johnson . . . saying.”
Well, yeah. Apple doesn’t discount because they sell stuff that people really, really want and that they can’t get anyplace else. And they don’t test because Steve Jobs refused to. At Penney? Sales have fallen by about 30 percent.
This doesn’t mean Johnson is stupid, or that he’s going to fail as CEO. Apparently he has partially reversed his early decision, which is a good sign. But it brings up a common feature of external CEO hires. Companies in a perceived crisis often look outside for a new leader, hoping for a superman (or -woman) who can singlehandedly turn around the organization. Not completely illogically, they tend to look for people at successful companies. “Make us more like X,” they pray. In Penney’s case, X = Apple.
There are two important questions they tend not to ask, however. First, was Apple successful because of Johnson, or was he just along for the ride? Yes, he was the main man behind the Apple Store (although, according to Walter Isaacson’s book, Steve Jobs was really the genius behind everything). But was the success of the Apple Store just a consequence of the success of the iPhone?
Second, even if Johnson was a major contributor to Apple’s success, how much of his abilities are transferable to and relevant to J.C. Penney? There’s a big difference between selling the most lusted-after products on the planet and selling commodities in second-rate malls. When someone has been successful in one context, how much information does that really give you about how he will perform in a new environment?
The obvious interpretation here is as a cautionary tale of executive hubris, but you can also look at it in terms of fitness landscapes (the following will be fairly straightforward, but if the concept doesn't ring a bell you might want to check here, here, and of course, here).
Let's try thinking in terms of the retail fitness landscape (presented with the usual caveat that I'm way out of my field). Just how distant is the Apple Store from J.C.P.?
Apple Stores are a relatively small boutique chain (400 stores total, 250 in the U.S.) concentrated heavily in prime commercial urban and upscale suburban areas. Their customer demographics tend toward upper-income, fashion-conscious early adopters with a demonstrated willingness to pay a premium for quality. Inventories consist of a few heavily branded, high-quality, high-markup items, all of which come from one very visible supplier with an excellent reputation. This allows an unusual (perhaps unique -- there's not another Apple) symbiotic relationship: the stores give the supplier a presence and a profit center, while the supplier lends the stores its powerful brand, large advertising budget and unparalleled PR operation.
In terms of customers, products, brand, retail space, vendor relations, logistics, scale and business model, moving from the Apple Store to JCP was a shift to a distant part of the retail landscape. What Johnson did, in essence, was say "these coordinates are associated with an extremely high point on the landscape (the Apple Store). Even though we've made large shifts in many of these dimensions, we can keep the same coordinates for the other dimensions and we'll find another optimum."
To put this in context, here's a useful example from T. Grandon Gill
Suppose, for example, you had a fitness function that mapped the list of ingredients to an objectively determined measure of “taste fitness” for all the recipes in a cookbook. If you were to do a regression on taste (dependent variable) using the ingredients (independent variables), you might find—for instance—that garlic shows a high positive significance. What would that tell you (other than, possibly, that the individuals rating the recipes enjoyed garlic)? What it would definitely not tell you is that you could improve your recipe for angel cake by adding garlic to it. Indeed, the whole notion of applying a technique that assumes linear decomposability to a fitness landscape that is so obviously not decomposable is preposterous.
Substitute a low level of coupons for a high level of garlic and you have a pretty good picture of the JCP strategy.
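To make Gill's point concrete, here's a quick simulation sketch. Everything in it is invented for illustration (a made-up taste function, made-up noise); the point is just that a pooled regression can hand you a positive garlic coefficient while garlic is still ruining the sweet recipes.

```python
# Toy illustration of regressing taste on ingredients when the true
# fitness function is not linearly decomposable. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500

garlic = rng.uniform(0, 1, n)
sugar = rng.uniform(0, 1, n) ** 2       # most dishes in this "cookbook" are savory

# Garlic helps savory dishes and hurts sweet ones; no linear model captures this.
taste = garlic * (1 - 2 * sugar) + rng.normal(0, 0.1, n)

# Ordinary least squares of taste on the two ingredients.
X = np.column_stack([np.ones(n), garlic, sugar])
beta, *_ = np.linalg.lstsq(X, taste, rcond=None)
print(f"pooled garlic coefficient: {beta[1]:+.2f}")   # comes out positive

# But within the sweet recipes, more garlic means worse taste.
sweet = sugar > 0.6
r = np.corrcoef(garlic[sweet], taste[sweet])[0, 1]
print(f"garlic-taste correlation, sweet recipes only: {r:+.2f}")   # negative
```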
How do we know the retail landscape is rugged? We don't, but we do have considerable evidence that certain approaches work better in some circumstances than they do in others (i.e. there are multiple local optima). More to the point, Johnson's entire strategy pretty much assumed that the many small and large players in the department store space (including Macy's, Sears, Dillard's, Kohl's, the pre-Johnson JCP and countless smaller chains and individual stores) were trapped in one or more low-quality optima. When you have this many diverse companies in a market this competitive and this mature, you expect to see a fair amount of something analogous to gradient searching ("That worked; let's do more of it."). If they haven't settled on your optimum point, it's almost certainly because they settled on another.
The lessons -- when you move into an established market you should probably assume the established players know the field and you should probably not assume that what worked somewhere else will work here -- could be (and were) reached without referring to fitness landscapes, but they do make a good framework for approaching a wide variety of problems.
Johnson moved to an unfamiliar region of a probably rugged landscape and refused to explore the surrounding area for higher points, despite the fact that numerous other players that had explored the region had settled on completely different points. When you phrase it this way, it doesn't sound good (of course, Johnson's approach doesn't sound good when you phrase it most ways).
* The 'C' stands for 'Cash' -- no, really.
Tuesday, March 19, 2013
If you actually want to close the achievement gap...
An excellent story from KPCC's Take Two. Considerably more effective in the audio version if you have the time.
Teenager Michelle Zamora has big dreams to become a civil engineer.
“Since 4th grade,” Zamora says, “I told myself I want to go to Stanford University.”
Zamora would be the first in her family to go to college, and as a self-described “smart kid,” Stanford never seemed too far-fetched an idea.
But at age 15, Michelle Zamora made a mistake: she got pregnant. And her dreams of college seemed to vanish.
Like thousands of other California teens, Zamora dropped out of high school.
She is among the majority of the state's teen moms -- 83% -- who come from low-income households. According to the California Department of Education, the state ranks number one nationwide in its rate of teen pregnancy.
The worst part, she said, was the way most people assumed she was condemned to a future that she didn’t want. People told her “well, you’re just going to be another teenager on welfare,” or “you’re not going to make it.” Zamora started to believe them.
And then she found out about a program in Baldwin Park that has given her renewed hope.
In the late 1990s, officials in the Baldwin Park Unified School District worried that they were losing too many students due to pregnancy. Using federal Early Head Start funds, the district launched an innovative program to ensure teen moms could stay in school.
When Zamora’s daughter was born in 2011, a friend told her about North Park High School, which provides on-site daycare so teen moms and dads can complete coursework.
A continuation high school, North Park enrolls students who failed or dropped out but now want to finish high school. Its child care program is one of 18 at high schools across Los Angeles County that cater to teen parents. Since 1999, about 60% of North Park students have graduated and gone on to higher education.
Monday, March 18, 2013
Today's vocabulary term is "flack-to-hack ratio"
Felix Salmon has one of those that-explains-a-lot posts up on his blog:
Quartz, in this deal, is getting one article, which needs a fair amount of editing; it’s a tiny proportion of Quartz’s daily output. Meanwhile, Brandtone is getting something very valuable indeed. Just look at the US flack-to-hack ratio: it’s approaching 9:1, according to the Economist, which means that for every professional journalist, there are nine people, some of them extremely well paid, trying to persuade that journalist to publish something about a certain company. That wouldn’t be the case if those articles weren’t worth serious money to the companies in question.
How valuable? How about somewhere between $250,000 and $1 million? That’s the amount of money that Fortune’s ad-sales team was asking, earlier this month, for a new product called Fortune Trusted Original Content:
Similar to licensed editorial content, TOC involves creating original, Fortune-branded editorial content (articles, video, newsletters) exclusively for marketers to distribute on their own platforms.
After news of the TOC program appeared, it was walked back — abolished, essentially. You can see why Fortune’s top editorial brass would be uncomfortable with the idea that Fortune editorial content could be commissioned by, and appear for the sole benefit of, advertisers. So now they’re going back to the old model, of just allowing advertisers to license (reprint, basically) stories which were independently commissioned and published by Fortune’s editors.
Still, the price point on the now-aborted TOC program is revealing. The cost of the content, from a “trusted freelancer”, would probably not be much more than a couple of thousand dollars — but the cost of the content to the advertiser could be as much as $1 million. The difference is entirely accounted for by the value of the Fortune brand.
The flack-to-hack ratio may have something to do with another recurring topic, the almost complete lack of coverage of the reemergence of over-the-air television (see here, here, here, here, and... hell, just do a search). Weigel Broadcasting may be an extraordinarily well run company, but as long as they run a largely flackless operation, you'll probably never hear about them.
I apologize if I posted this before...
... but this Marketplace piece on a program to get disadvantaged families out of bad neighborhoods is definitely worth checking out.
Sunday, March 17, 2013
Weekend movie blogging -- Herman Mankiewicz in Oz
With Oz the Great and Powerful bringing Oz back to the box office in a big way, it's worth taking a minute to give a little credit to someone who made an essential but largely unrecognized contribution to the original classic: Herman Mankiewicz. Famed film historian/director/sycophant Peter Bogdanovich has spent the past few decades trying to undermine Mankiewicz's reputation, ever since Pauline Kael had the temerity to suggest that Mankiewicz was the primary author of the script of Citizen Kane.
Bogdanovich has sold the "Herman Mankiewicz was a talented hack" line to countless credulous journalists and film students over the years and supported the claim with a highly selective recounting of Mankiewicz's resume. With Oz back in such a big way, one of the films Bogdanovich omits is particularly relevant:
In February, 1938, he was assigned as the first of ten screenwriters to work on The Wizard of Oz. Three days after he started writing he handed in a seventeen-page treatment of what was later known as "the Kansas sequence". While Baum devoted less than a thousand words in his book to Kansas, Mankiewicz almost balanced the attention on Kansas to the section about Oz. He felt it was necessary to have the audience relate to Dorothy in a real world before transporting her to a magic one. By the end of the week he had finished writing fifty-six pages of the script and included instructions to film the scenes in Kansas in black and white. His goal, according to film historian Aljean Harmetz, was "to capture in pictures what Baum had captured in words--the grey lifelessness of Kansas contrasted with the visual richness of Oz." He was not credited for his work on the film, however.
There are, of course, many things that have to go right to produce a truly iconic film, but if you had to pick the one element that made the film work and made people remember it, you'd probably have to go with Mankiewicz's contribution.
Saturday, March 16, 2013
Playing with paper over at You Do the Math
Been thinking a lot about paper from a material-science standpoint (the mind wanders when SAS runs slowly), specifically about using paper to teach kids about the physical properties of different shapes and how we test them.
I've kicked off an ongoing thread on the subject at my math teaching blog. The first (with the admittedly pretentious title, "Reseeing Paper") is an overview of paper as a way of exploring shape. The second ("Paper Towers") lays out the rules for some engineering projects/competitions suitable (almost without alteration) for classes ranging from fourth grade to freshman engineering (though one would like to think that the results for the freshmen would be a bit more sophisticated). The projects would also be suitable for science-based family activities. There is more of this to come (I haven't even started on corrugation).
Check it out and if you know of any teachers who are looking for new (and cheap) manipulatives, please send them the links. There are any number of potential lesson plans here.
Thanks,
Mark
p.s. Suggestions are always welcome.
Friday, March 15, 2013
When you hear proposals to control tuition by reducing instruction cost...
When you hear proposals to control tuition by reducing instruction cost (in the recent discussions of MOOCs, for example), here are a couple of numbers you should keep in mind. They haven't been adjusted for inflation and they're based on a sample size of one, but I still think they'll give you a pretty clear picture.
Back in the Nineties I did a four-year stint as a lecturer at a large state university. The standard load for lecturers was four courses a semester and the pay was seventeen thousand and change. (I was only on a half load, with the other half of my funding coming from other duties like supervising grad students, but the following numbers still hold.)
If you break it down, that comes to less than twenty-five hundred dollars for a three-hour course. With the exception of a couple of upper-level courses, my sections generally ranged from twenty-five to one hundred and fifty students. That means that the per-student cost associated with the lecture portion of one of those courses ranged from less than one hundred dollars at the top end to around fifteen dollars at the bottom.
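For what it's worth, here's that back-of-the-envelope arithmetic spelled out, using the same rough, unadjusted numbers from memory (a sketch, not payroll data):

```python
# Rough 1990s lecturer pay, unadjusted for inflation.
annual_pay = 17_000                 # "seventeen thousand and change"
courses_per_year = 4 * 2            # four courses a semester, two semesters

cost_per_course = annual_pay / courses_per_year
print(f"cost per three-hour course: ${cost_per_course:,.0f}")   # ~$2,125

for class_size in (25, 150):
    per_student = cost_per_course / class_size
    print(f"per-student lecture cost, class of {class_size}: ${per_student:,.2f}")
```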
If someone has some current numbers I'd be glad to update the post but as far as I can tell, while tuition has continued to climb since my lecturer days, adjunct salaries have, at best, kept up with inflation and certainly haven't grown enough to be a major driver of education costs. But what's really amazing isn't that you can get people to take these jobs at this pay; it's that you can find wildly overqualified people -- promising scholars, gifted lecturers -- willing to take these jobs. That's how flooded the supply of would-be professors is.
There are well-paid, even over-compensated professors out there but they are all paid primarily for something other than teaching, be it their research or their reputation (which reflects on the school) or the grants they pull in. We can and probably should have a serious discussion about these roles (maybe starting here) but that's a different conversation.
As for controlling tuition by reducing instructor costs, that conversation has to start with a realistic picture of how much people who are hired simply to teach actually make.
Thursday, March 14, 2013
The Rise of P-Value
In the middle of a recent conversation prompted by this post by Andrew Gelman, it struck me that I couldn't recall encountering the term p-value before I started studying statistics in the Nineties. These days you frequently run across the term in places like the NYT article Gelman cited or this piece in the Motley Fool; were they always there and I just missed them?
Fortunately we have Google's Ngram viewer to resolve these questions and apparently the answer is a qualified yes. While people were talking about p-values at the beginning of the decade, more people were talking about them by the end.
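If you want to eyeball the trend yourself, something like the following will do once you've saved the viewer's yearly numbers for "p-value" to a local file (the file name and column names here are my own assumptions; Google doesn't hand you this CSV):

```python
# Plot the relative frequency of "p-value" over time from a hand-saved
# CSV of Ngram viewer output with 'year' and 'frequency' columns.
import pandas as pd
import matplotlib.pyplot as plt

ngram = pd.read_csv("p_value_ngram.csv")   # hypothetical local export

ngram.plot(x="year", y="frequency", legend=False)
plt.title('Relative frequency of "p-value" in the Google Books corpus')
plt.xlabel("year")
plt.ylabel("frequency")
plt.show()
```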
The question now is how much of that growth is attributable to general interest writing like the NYT.
Wednesday, March 13, 2013
Epidemiology and Truth
This post by Thomas Lumley of Stats Chat is well worth reading and thinking carefully about. In particular, when talking about a study of processed meats and mortality he opines:
So, the claims in the results section are about observed differences in a particular data set, and presumably are true. The claim in the conclusion is that this ‘supports’ ‘an association’. If you interpret the conclusion as claiming there is definitive evidence of an effect of processed meat, you’re looking at the sort of claim that is claimed to be 90% wrong. Epidemiologists don’t interpret their literature this way, and since they are the audience they write for, their interpretation of what they mean should at least be considered seriously.
I think "supports an association" has to be the most misunderstood phrase in epidemiology (and we epidemiologists are not innocent of this mistake ourselves). The real issue is that cause is a very tricky animal. It can be the case that complex disease states have a multitude of "causes".
Consider a very simple (and utterly artificial) example. Let's assume (no real science went into this example) that hypertension (high systolic blood pressure) occurs when multiple exposures overwhelm a person's ability to compensate for the insult. So if you have only one exposure from the list then you are totally fine. If you have two or more then you see elevated blood pressure. Let's make the list simple: excessive salt intake, sedentary behavior, a high-stress work environment, cigarette smoking, and obesity. Now some of these factors may be correlated, which is its own special problem.
But imagine how hard this would be to disentangle, using either epidemiological methods or personal experimentation. Imagine two people who work in a high-stress job, one of whom eats a lot of salt. They both start a fitness program due to borderline hypertension. One person sees the disease state vanish whereas the other sees little to no change. How do you know which was the important factor?
It's easy to look at differences in the exercise program; if you torture the data enough it will confess. At a population level, you would expect completely different results depending on how many of these factors the underlying population had. You'd expect, in the long run, to come to some sort of conclusion but it is unlikely that you'd ever stumble across this underlying model using associational techniques.
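To get a sense of just how opaque even this cartoon model would look in data, here's a quick simulation sketch; the prevalences, like the model itself, are made up:

```python
# Simulate the toy threshold model: hypertension iff two or more of the
# five exposures are present, then look at the usual one-exposure-at-a-time
# associations. All prevalences are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
exposures = ["salt", "sedentary", "stress", "smoking", "obesity"]
prevalence = [0.5, 0.6, 0.4, 0.2, 0.3]

X = rng.binomial(1, prevalence, size=(n, len(exposures)))
hypertension = (X.sum(axis=1) >= 2).astype(int)

for j, name in enumerate(exposures):
    risk_exposed = hypertension[X[:, j] == 1].mean()
    risk_unexposed = hypertension[X[:, j] == 0].mean()
    print(f"{name:10s} risk difference: {risk_exposed - risk_unexposed:+.2f}")

# Every exposure shows a positive association, but nothing in this output
# reveals the underlying "any two of the five" rule.
```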
The argument continues:
So, how good is the evidence that 90% of epidemiology results interpreted this way are false? It depends. The argument is that most hypotheses about effects are wrong, and that the standard for associations used in epidemiology is not a terribly strong filter, so that most hypotheses that survive the filter are still wrong. That’s reasonable as far as it goes. It does depend on taking studies in isolation. In this example there are both previous epidemiological studies and biochemical evidence to suggest that fat, salt, smoke, and nitrates from meat curing might all be harmful. In other papers the background evidence can vary from strongly in favor to strongly against, and this needs to be taken into account.
This points out (correctly) the troubles in just determining an association between A and B. It's ignoring all of the terrible possibilities -- like A is a marker for something else and not the cause at all. Even a randomized trial will only tell you that A reduces B as an average causal effect in the source population under study. It will not tell you why A reduced B. We can make educated guesses, but we can also be quite wrong.
Finally, there is the whole question of estimation. If we count a result as false whenever the estimated size of the average causal effect of intervention A on outcome B is not completely unbiased, then I submit that 90% is a very conservative estimate (especially if you make "truth" an interval around the point estimate at the precision of the reported estimate, given the oddly high number of decimal places people like to quote for fuzzy estimates).
But that last point kind of falls into the "true but trivial" category . . .
Tuesday, March 12, 2013
Landscapes in everything
SLIGHTLY UPDATED
One of the issues I have with economics exceptionalism is the word 'everything,' as in "markets in everything" or "the hidden side of everything." Not that there's anything wrong with applying economic concepts to a wide variety of questions (I do it myself), but at some point they become overused and start crowding out ideas that are better in a given context.
Think about all the times you've heard phrases like 'the marriage market,' often followed by the implicit or explicit suggestion that the tools of economics hold the key to understanding all sorts of human behavior, even in cases where the underlying assumptions of those tools probably don't apply. Now, for example, compare that to the number of times you've recently heard someone describe something as a fitness landscape when they weren't talking about evolution or physics (OK, that's not the term physicists generally use, but the concept is basically the same).
Landscapes are a powerful and widely applicable concept, arguably more so than markets (they are also a long-time fascination of mine). Ideas like gradient searches, perturbation, annealing and, most of all, local optimization are tremendously useful, both to explain complex problems and to suggest approaches for solving them. Once you start thinking in those terms you can see landscapes about as often as Tyler Cowen sees markets.
You can even find researchers coming up with the kind of unexpected, everyday examples that you might expect in a Steven Levitt column.
My favorite recent example (at least recent to me) is T. Grandon Gill's observation that the recipes in a cookbook are essentially the coordinates of local optima on a culinary fitness landscape, where the amount of each ingredient is a dimension and taste is the fitness function (technically we should add some dimensions for preparation and make some allowance for the subjectivity of taste, but I'm keeping things simple).
This is a great example of a rugged landscape that everyone can relate to. You can find any number of delicious recipes made with the same half dozen or so ingredients. As you start deviating from one recipe (moving away from a local optimum), the results tend to get worse initially, even if you're moving toward a better recipe.
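Here's a minimal sketch of that behavior on an invented one-dimensional landscape: a greedy local search started from two different points settles on two different "recipes," and only one of them is anywhere near the global peak.

```python
# Greedy local search ("that worked; let's do more of it") on an invented
# rugged 1-D landscape: a broad hill plus high-frequency ripples.
import numpy as np

def fitness(x):
    return -(x - 3.0) ** 2 + 2.0 * np.sin(5.0 * x)

def hill_climb(x, step=0.05, iters=500):
    """Move only when a small step in either direction improves fitness."""
    for _ in range(iters):
        x = max((x - step, x, x + step), key=fitness)
    return x

# Two starting points, two different local optima.
for start in (0.0, 3.0):
    end = hill_climb(start)
    print(f"start at {start:+.1f} -> stops at x = {end:.2f}, fitness = {fitness(end):.2f}")
```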
Approaching something as a rugged landscape can provide powerful insights and very useful tools, which leads to another concern about economic exceptionalism -- economics as a field tends to make little use of these models and many economists routinely make modeling assumptions that simply make no sense if the surface being modeled really is rugged.
I asked Noah Smith* about this and as part of his reply he explained:
But for analyzing the equilibrium state of the economy - prices and quantities - economists tend to try as hard as they can to exclude multiple equilibria. Often this involves inventing arbitrary equilibrium criteria with zero theoretical justification. This is done routinely in micro (game theory) as well as in macro. An alternative procedure, commonly used in macro by DSGE practitioners, is to linearize all their equations, thus assuring "uniqueness". Some researchers are averse to this practice, and they go ahead and publish models that have multiple equilibria; however, there is a strong publication bias against models that have multiple equilibria, so many economists are afraid to do this. An exception is that some models with two equilibria (a "good" equilibrium and a "bad" or "trap" equilibrium) do get published and respected. Models with a bunch of equilibria, or where the economy is unstable and tends to shift between equilibria on its own at a high frequency, are pretty frowned upon.
This doesn't mean that economists can't work with these concepts, but it does mean that as economists increasingly dominate the social sciences, approaches that don't fit with the culture and preferred techniques of economics are likely to be underused.
And some of those techniques are damned useful.
* now with source.
Monday, March 11, 2013
Some epidemiology for a change
John Cook has an interesting point:
When you reject a data point as an outlier, you’re saying that the point is unlikely to occur again, despite the fact that you’ve already seen it. This puts you in the curious position of believing that some values you have not seen are more likely than one of the values you have in fact seen.
This is especially problematic in the case of rare but important outcomes, and it can be very hard to decide what to do in these cases. Imagine a randomized controlled trial for the effectiveness of a new medication (maybe something like memory improvement in older adults). One of the treated participants experiences sudden cardiac death whereas nobody in the placebo group does.
On one hand, if the sudden cardiac death had occurred in the placebo group, we would be extremely reluctant to advance this as evidence that the medication in question prevents death. On the other hand, rare but serious adverse drug events both exist and can do a great deal of damage. The true but trivial answer is "get more data points". Obviously, if this is a feasible option it should be pursued.
But these questions get really tricky when there is simply a dearth of data. Under these circumstances, I do not think that any statistical approach (frequentist, Bayesian or other) is going to give consistently useful answers, as we don't know if the outlier is a mistake (a recording error, for example) or if it is the most important feature of the data.
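To put rough numbers on the dilemma (the group sizes here are invented for illustration): one sudden cardiac death among the treated, none under placebo, and the standard test is essentially silent.

```python
# One adverse event in the treated arm, none under placebo. With groups
# this small, a formal test can't distinguish signal from accident.
from scipy.stats import fisher_exact

deaths = [1, 0]            # treated, placebo
group_size = [100, 100]    # invented for illustration

table = [
    [deaths[0], group_size[0] - deaths[0]],
    [deaths[1], group_size[1] - deaths[1]],
]
odds_ratio, p_value = fisher_exact(table)
print(f"two-sided Fisher exact p-value: {p_value:.2f}")   # ~1.0

# Treating the death as an outlier and dropping it makes the arms look
# identical; keeping it leaves a "signal" far too weak to test. Neither
# choice can be justified by the data alone.
```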
It's not a fun problem.