
Wednesday, July 31, 2013

General versus particular cases

Andrew Gelman wrote a very interesting article in Slate on how being overly reliant on statistical significance can lead to spurious findings.  The authors of the study that he was critiquing replied to his piece.  Andrew's thoughts on the response are here.

This led to two thoughts.  One, I am completely unimpressed by appeals to a paper being in a peer-reviewed journal -- peer review is a screen, but even good tests have false positives.  All it convinces me of is that the authors were thoughtful in developing the article, not that they are immune to problems.  But this is true of all papers, including mine. 

Two, I think that this is a very tough area in which to single out one example.  The reason is that any one paper could well have followed the highest possible level of rigor, as Jessica Tracy and Alec Beall claim they have done.  That doesn't necessarily mean that all studies in the class have followed these practices, or that there were no filters that aided or impeded publication in ways that might increase the risk of a false positive.

For example, I have just finished publishing a paper where I had an unexpected finding that I wanted to replicate (the existence of an association was a priori; the direction was the reverse of the a priori hypothesis).  I found such a study, added additional authors, added additional analysis, rewrote the paper as a careful combination of two different cohorts, and redid the discussion.  Guess what: the finding did not replicate.  So then I had the special gift of publishing a null paper with a lot of authors and some potentially confusing associations.  If I had just given up at that point, the question might have hung around until somebody else found the same thing (I often use widely available data in my research) and published it. 

So I would be cautious about multiplying p-values together to get the probability of a false positive.  Jessica Tracy and Alec Beall write:
The chance of obtaining the same significant effect across two independent consecutive studies is .0025 (Murayama, K., Pekrun, R., & Fiedler, K. (in press). Research practices that can prevent an inflation of false-positive rates. Personality and Social Psychology Review.)
I suspect that this would only hold if the testable hypothesis was clearly stated before either study was done.  It also presumes independence (it is not always obvious that this will hold, as design elements of studies may influence each other) and that there is no confounding factor involved (that is, something causing both the exposure and the outcome).
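To make the first point concrete, here is a minimal simulation sketch (all numbers assumed: a two-group design, ten candidate outcomes, and no true effects anywhere).  If the hypothesis is chosen after looking at the first study -- test whichever outcome looked most striking -- the chance of two consecutive "significant" results is several times the nominal .0025:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sims = 50, 10, 20_000   # subjects per group, candidate outcomes, runs
hits = 0
for _ in range(sims):
    # Study 1: two groups, k outcomes, every true effect is zero.
    a1, b1 = rng.normal(size=(n, k)), rng.normal(size=(n, k))
    t1 = (a1.mean(0) - b1.mean(0)) / np.sqrt(a1.var(0, ddof=1) / n + b1.var(0, ddof=1) / n)
    if not (np.abs(t1) > 1.96).any():
        continue                     # nothing "significant" to chase
    j = int(np.argmax(np.abs(t1)))   # pick the most striking outcome post hoc
    # Study 2: test only that outcome, on fresh data.
    a2, b2 = rng.normal(size=n), rng.normal(size=n)
    t2 = (a2.mean() - b2.mean()) / np.sqrt(a2.var(ddof=1) / n + b2.var(ddof=1) / n)
    if abs(t2) > 1.96 and np.sign(t2) == np.sign(t1[j]):
        hits += 1                    # a pure-noise finding "replicates"
print(hits / sims)  # about .01 -- roughly four times the nominal .0025
```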

Furthermore, I think that as epidemiologists we need to decide whether these studies are making strong causal claims or advancing a prospective association that may lead to a better understanding of a disease state.  We often write articles in the latter mode but then lapse into the former when being quoted. 

So I guess I am writing a lot to say a couple of things in conclusion. 

One, it is very hard to pick a specific example of a general problem, because any one example might happen to meet the standards required for the depth of inference being made, and that is difficult to ascertain within the conventions of the literature. 

Two, the decisions about what to study and what to publish are also pretty important steps in the process.  They can have a powerful influence on the direction of science in a manner that is very hard to detect. 

So I want to thank Andrew Gelman for starting this conversation and the authors of the paper in question for acting as an example in this tough dialogue. 



Tuesday, January 22, 2013

More reflections on a study (EDITED)

Okay, before I get to the meat of this post, this quote by Andrew Gelman is dynamite:

. . . I’m generally suspicious of arguments in which the rebound is bigger than the main effect.
 
How many "counter-intuitive" studies would survive this kind of skepticism.  Not that a rebound effect can't be larger, but like many unlikely things it requires a higher level of proof.

The context is an education study suggesting that the more parents pay for a student's education, the lower that student's grades will be.  The authors apparently tried to control for a lot of possible confounders (like SAT scores), but the whole process ends up looking like "what not to do in regression analysis". 

There is an intermediate-variable problem, a restriction-of-range problem (extrapolating parental support out to values that exceed annual income), and an issue with differential drop-out that does not seem to be addressed.  All of these points are covered in Andrew's nice write-up.

What I want to focus on is the sharp counterfactual.  I am not always a fan of counterfactual reasoning, but I think it would provide a ton of clarity in this case.  The real claim is that if you decreased exposure X (parental support) then you would increase outcome Y (GPA).  The direct causal model would suggest that the fastest way to improve student grades would be to make your contribution zero.  But, the last time I looked, Pell grants require a non-zero parental contribution in most cases (it is a little hard to tell precisely what the thresholds are, but they are definitely not zero for most students).  So there is clearly a floor on parental contributions (and if the mandated contribution were the only source of support, the effect would become "the richer the parents, the worse the grades"). 

So maybe, to have a realistic counter-factual, the exposure should be dollars of support above the minimum expected contribution? 

So what we really have to be talking about is the effect of a marginal dollar, separate from the (non-linear) scale of what parents are required to pay.  But even there, the sign is not obvious.  Imagine that your not-especially-inspired child gains admission to Stanford but is struggling with the material.  Do you insist, on principle, that they get a job, or do you pay more so that they have a better chance of becoming a "C average" Stanford graduate (which is much better than a Stanford drop-out)?  So the causal direction is actually unclear. 

But if the idea is that giving more resources to students decreases performance, then there are a lot of experiments we could try.  For example, we could decrease wages (for everyone, including upper management) and see if performance goes up.  Or we could randomize students to improved levels of support.  Better yet, we could look at experiments that have already been done:

 

We examine the impacts of a private need-based college financial aid program distributing grants at random among first-year Pell Grant recipients at thirteen public Wisconsin universities. The Wisconsin Scholars Grant of $3,500 per year required full-time attendance. Estimates based on four cohorts of students suggest that offering the grant increased completion of a full-time credit load and rates of re-enrollment for a second year of college. An increase of $1,000 in total financial aid received during a student’s first year of college was associated with a 2.8 to 4.1 percentage point increase in rates of enrollment for the second year.
 
So not only is the main effect in the opposite direction (at least in terms of retention), but it has precisely the impact on a GPA analysis that Andrew expects: students are more likely to leave with lower levels of support.  Do we think that leaving school is completely independent of performance (that there is no GPA difference between the drop-outs and those who persist)?  Or is parental support different, in some magic way, from government grant support?  Are people more careful stewards of government money than of money from their close community (and think about what that would mean for charity versus government welfare programs, if true)?
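To see how much damage differential drop-out alone can do, here is a minimal simulation sketch (all numbers invented).  Support genuinely helps GPA, but students persist only when support and grades together keep them afloat; among the survivors, the association reverses:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
support = rng.normal(size=n)                # parental support (standardized)
gpa = 0.2 * support + rng.normal(size=n)    # support genuinely *helps* GPA
# Hypothetical persistence rule: students stay when support plus grades
# (plus luck) keep them afloat, so drop-out depends on both variables.
stays = (support + gpa + rng.normal(size=n)) > 0
r_all = np.corrcoef(support, gpa)[0, 1]
r_stayers = np.corrcoef(support[stays], gpa[stays])[0, 1]
print(f"all students: r = {r_all:+.2f};  persisters only: r = {r_stayers:+.2f}")
# -> about +0.20 overall but negative among those still enrolled:
#    selecting on persistence can flip the sign of a helpful exposure.
```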

I agree that the current form of this study is impossible to interpret. 

[EDIT: Talking with Mark, it is clear that I was unclear on one point above.  The experiments show that money from a specific source (i.e., government funding) pushes in a specific direction, but they don't at all address whether money from parents has a similar causal effect (Mark is promising to take this up in a post of his own).  The issues of selection, intermediate variables, and experimental evidence from other sources are all important, but without re-analyzing the data it is impossible to prove the directionality of the bias.  As an epidemiologist I am trained to speculate on bias direction/strength, but I recognize that is all I am doing.]

Friday, October 26, 2012

When you start a sentence with "one good indicator of a person who’s not so smart," you should be extra careful about what you say next

Andrew Gelman spends some time on this latest quote from Steven Levitt on the rationality of voting:
DUBNER: So Levitt, how can you in your life, when you wander around, tell the difference between a smart person and a not-so-smart person?

LEVITT: Well, one good indicator of a person who’s not so smart is if they vote in a presidential election because they think their vote might actually decide which candidate wins. . . . there has never been and there never will be a vote cast in a presidential election that could possibly be decisive.
Gelman has been riding this beat for a long time, repeatedly pointing out the flaws in this strangely persistent argument. He makes a good case (part of which I basically paraphrase in point one), but there are other problems with Levitt's claims.

Here's a brief and certainly incomplete list of objections.

1. Every vote affects the probability distribution of a race, and since the difference in outcomes is so large, even a tiny change in probabilities can conceivably create a detectable change in expected value (a back-of-the-envelope sketch follows this list).

2. Every vote counts in every race. Except for undervoting, we're talking about the combined impact across the entire ballot.

3. This isn't binary. The margin of a win can affect:

Perceived mandate and political capital;

Officials' decisions (particularly in non-term-limited positions). Congressmen who win by large margins are less likely to feel constrained about unpopular votes;

Funding. A lopsided defeat can make it harder for a candidate or a state party to raise money;

Party strategy. How much effort do you expend finding a challenger against an official who beat you by more than ten points last time?

Media narrative. It's possible to come back after the press corps has labeled you a loser, but it isn't easy.

and finally

4. The system works better with higher response rates. It's more stable and harder to game. Perhaps even more important, it does a better job representing the will of the governed.

That's my off-the-top-of-my-head list. Undoubtedly, I missed some.
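On point one, the arithmetic is easy to sketch. The numbers below are purely illustrative (in the spirit of Gelman's voting-as-charity argument, not his actual estimates):

```python
# Purely illustrative numbers -- not Gelman's actual estimates.
p_decisive = 1e-7            # chance that one vote decides the election
people_affected = 3e8        # rough U.S. population
benefit_per_person = 100.0   # assumed average dollar value of the better outcome
expected_social_value = p_decisive * people_affected * benefit_per_person
print(f"expected social value of one vote: ${expected_social_value:,.0f}")
# -> $3,000: a tiny probability times enormous stakes is not negligible.
```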

Gelman goes on:
I would not conclude from the above discussion that Levitt is not so smart. Of course he’s very smart, he just happens to be misinformed on this issue. I applaud Levitt’s willingness to go out on a limb and say controversial things in a podcast, to get people thinking. I just wish he’d be a bit less sure of himself and not go around saying that he thinks that Aaron, Noah, Nate and I are not so smart.
He's being overly diplomatic. Levitt isn't just misinformed; he's willfully misinformed. On issue after issue (drunk driving, car seats, solar energy) he has used sloppy reasoning to reach a controversial position, then done his best to turn a deaf ear to those who pointed out his errors. We did get a partial retraction of his claims on drunk driving*, but on others he has doubled down and occasionally resorted to cheap shots at those who disagreed with him.

Levitt is very smart. That's what makes this sort of thing so difficult to overlook.




* Though still leaving potential errors unacknowledged, such as the likely possibility that drivers in accidents are more likely to be checked for intoxication than pedestrians, that a stricter standard might be used for drivers, that many of the most intoxicated are prevented from driving, and that intoxication is more likely to be noted in official records for drivers.

Tuesday, September 4, 2012

Why Observational Epidemiology is frustrating

Andrew Gelman has a post up on the history of cigarette smoking research, based on a book he was reading a while back.  It's pretty interesting but what really caught my eye was this comment:

Vague statistical inference can not possibly establish such a causal link. Even valid associative inference should establish a 50-100% correlation between smoking and cancer, but it does not even come close. Most people who smoke don’t get lung cancer, and at least 10% of Americans who do get lung cancer- do not smoke. There are also huge international/ethnic variations among smokers and cancer rates. There is currently no proof whatsoever for the alleged smoking-cancer causal link. None. Smoking is a disgusting and silly habit. But all that one can now objectively say is that it is a risk factor for cancer and increases the incidence of lung cancer.
 
And this was not the only person in the comments casting doubt on this association.  As an epidemiologist, I want to scream.  If people will not believe this evidence, then they really will not believe any level of evidence from observational epidemiology.  We have cohort studies going back 50 or more years (Richard Doll ran one).  Even better, the members of this cohort did not initially know that smoking was harmful (and I recall that the original hypothesis was automobile fumes rather than smoking, although my memory may be failing me here).  So we don't even suspect a healthy-abstainer effect. 

The demand for a 50-100% correlation seems to require that smoking be directly and deterministically causal of lung cancer, rather than increasing the underlying risk of lung cancer.  Consider skiing and broken legs.  Not all broken legs are due to skiing, and many people ski without breaking a leg.  But there is no question that skiing is a risk factor for broken legs.  Another good example is collapsed disks in the back.  If you are working with a veterans population, the first question you ask when you see a compression fracture in the spine is "were you a paratrooper?".  Not all paratroopers have compression fractures, and not all compression fractures are due to jumping out of airplanes, but it is a pretty direct link to increased risk. 
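The arithmetic behind this distinction is worth a quick sketch.  The rates below are round numbers assumed for illustration (not the real epidemiology): even a fifteen-fold relative risk produces a correlation nowhere near the demanded 50-100%.

```python
# Round, assumed numbers for illustration -- not the actual rates.
p_smoke = 0.5                          # assumed exposure prevalence
risk_smoker, risk_never = 0.15, 0.01   # assumed lifetime lung-cancer risks
p_cancer = p_smoke * risk_smoker + (1 - p_smoke) * risk_never
# Phi coefficient: the correlation between two binary variables.
cov = p_smoke * (1 - p_smoke) * (risk_smoker - risk_never)
phi = cov / ((p_smoke * (1 - p_smoke)) ** 0.5 * (p_cancer * (1 - p_cancer)) ** 0.5)
print(f"relative risk = {risk_smoker / risk_never:.0f}x, correlation = {phi:.2f}")
# -> a 15x relative risk yields a correlation of only about 0.26.
```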

There is a libertarian line of defense here: people ski because they value the enjoyment of skiing more than the risk of a broken limb.  I am not always delighted by it, but it is at least an arguable position.  But directly denying the link between smoking and lung cancer seems to be setting a very aggressive standard of proof. 

Tuesday, May 29, 2012

Dual Class shares and free markets

Andrew Gelman weighs in on dual-class shares:

Now I’m just confused here. Who’s supposed to be “concerned” here? As a New Yorker subscriber, am I supposed to be concerned that dual-class firms underperformed the market? I just don’t get it. Why should I care? If the shares underperform the market, people can buy a piece of Facebook for less. That’s fine too, no?
I think that Andrew would be completely correct in a perfect market (one in which none of Mark Thoma's issues are present).  If some financial products give a piece of the return while others give ownership plus return, then people could choose which ones to purchase. 

However, the modern American economy has fallen in love with the 401(k) as an investment vehicle.  This leads to two problems.  One, investors are generally not free to switch to a different fund because they dislike the investment decisions of the fund they are in.  Two, since the individual investor bears all of the losses from bad decisions while the employer controls the fund (and has an incentive to cut costs), you have a classic principal-agent problem.

These problems are made worse by giving a limited group of people control over a group investment.  One can easily imagine the small group making decisions that benefit themselves at the expense of the majority of shareholders.  Again, not necessarily a problem in an open market.  But given the constraints individual investors are under, this could be problematic, as they lack the freedom to enter or exit the market. 

This is why I wax poetic about Social Security (or the Canada Pension Plan): these programs shift risk from small investors (who generally can't bear it) to large entities (that can).  I totally get that there are total social resource constraints, but I would rather they be dealt with openly.  Instead I see the stock market becoming a worse and worse deal just as a large American cohort (the "Baby Boom") is about to retire. 

I am not sure that this is a good thing. 

See also Matt Yglesias and Felix Salmon.

Wednesday, March 21, 2012

Freakonomics

Andrew Gelman's blog has a nice discussion of Freakonomics that is very topical given the discussion of Mike Daisey.  I think that it was a pretty balanced response to Stephen Dubner, who seemed to be rather distressed by the Andrew Gelman and Kaiser Fung critique.  Instead, I think that pointing out issues in a provocative and thought-provoking blog is essential.  I admit that I often get very frustrated with the constant criticism that comes with peer review.  But it is essential to have errors pointed out, and I have not seen a better way to make that happen than to have mistakes repeatedly pointed out -- it certainly makes me more careful as an epidemiologist.

One piece that I do think is worth reflecting on is this one:
Their first example of a “mistake” concerns a May, 2005, Slate column we wrote about the economist Emily Oster’s research on the “missing women” phenomenon in Asia. Her paper, “Hepatitis B and the Case of the Missing Women,” was about to be published in the Aug. 2005 issue of the Journal of Political Economy. At the time, Levitt was the editor of JPE, and Oster’s paper had been favorably peer-reviewed.  
Oster argued that women with Hepatitis B tend to give birth to many more boys than girls; therefore, a significant number of the approximately 100 million missing females might have been lost due to this virus rather than the previously argued explanations that included female infanticide and sex-selective mistreatment.  
Other scholars, however, countered that Oster’s conclusion was faulty. Indeed, it turned out they were right, and she was wrong. Oster did what an academic (or anyone) should do when presented with a possible error: she investigated, considered the new evidence, and corrected her earlier argument. Her follow-up paper was called “Hepatitis B Does Not Explain Male-Biased Sex Ratios in China.”
I think that this misses the point of what was causing concern with the article.  An economist wanders into public health and completely overturns the conventional wisdom by considering a possible predictor, without really understanding why epidemiologists had not considered a disease-based explanation before.  It should not be treated as a small point that the article showed up in an economics journal and not in a journal where it would be reviewed by experts in the clinical area. 

Is it necessarily wrong to have reported potentially exciting new results?  No.  It is also true that people did put in the effort to report when the understanding changed.  But this was in a well-developed area of public health with very high policy stakes and people willing to put in a lot of effort to see whether there could be an alternate explanation.  So it induces some skepticism about "counter-intuitive" claims in areas that lack the resources to scrutinize such claims deeply. 

Now, it is natural that research has an error rate.  I wish it did not (especially not my research).  But it does point out the hazards of popularizing preliminary results.  I think I am especially sensitive to this issue, as no field is more guilty of alarming and counter-intuitive findings than pharmacoepidemiology.  So I look for clues that make me cautious about publicizing preliminary results before they are really ready for prime time. 

Monday, December 19, 2011

Can we do observational medical research?


Andrew Gelman has a really nice post on observational medical research.  How could I not respond?

In the post he quotes David Madigan who has a fairly strong opinion on the matter:
I’ve been involved in a large-scale drug safety signal detection project for the last two or three years (http://omop.fnih.org). We have shown empirically that for any given safety issue, by judicious choice of observational database (we looked at 10 big ones), method (we looked at about a dozen), and method setup, you can get *any* answer you want – big positive and highly significant RR or big negative and highly significant RR and everything in between. Generally I don’t think there is any way to say definitively that any one of these analysis is a priori obviously stupid (although “experts” will happily concoct an attack on any approach that does not produce the result they like!). The medical journals are full of conflicting analyses and I’ve come to the belief that, at least in the medical arena, the idea human experts *know* the *right* analysis for a particular estimand is false.

This seems overly harsh to me.  Dr. Madigan (who I think is an amazing statistician) is working with OMOP, which I recall as being built on fairly low-quality data (prescription claims for Medicare/Medicaid, the GPRD, and other clinical databases).  That is a necessary evil to get the power to detect rare (but serious) adverse drug outcomes, but these databases are often problematic when extended beyond clear-cut signal detection.  

The clearest example of high-quality medical data is likely the randomized, controlled, double-blinded clinical trial.  But there is a whole layer of data between these two extremes of quality (prospective cohort studies, for example) that has also generated a lot of important findings in medicine.

Sure, it is true that prospective cohort studies tend to be underpowered to detect rare adverse drug side effects (for precisely the same reason that RCTs are).  But there is a lot of interesting observational medical research that does not generate conflicting results, or where the experts really do seem to have a good grasp of the problem.  The link between serum cholesterol levels and cardiovascular events, for example, seems relatively solid and has been widely replicated.  So has the link between smoking and lung cancer (or cardiovascular disease) in North American and European populations.  There is a lot that we can learn from observational work.

So I would be careful to generalize to all of medical research.

That being said, I have a great deal of frustration with medical database research, for many of the same reasons as David Madigan.  I think the problems with trying to do research in medical claims data would make an excellent series of posts, as the topic is far too broad for a single one.

Friday, December 2, 2011

Economics and Morality

I have noticed a rather poisonous idea starting to gain currency -- the link between moral virtue and economic success. Consider the two excerpts below.

Matt Yglesias:

To be clear, I don't think we're looking at hypocrisy exactly. Instead it goes back to the preference for morality tales. Whoever is up at the moment must be up because of their greater moral virtues. I seem to have somehow missed the conservative articles lauding Germany and the Netherlands from back four or five years ago. Instead at the time I was reading lots of stories about the triumph of the Celtic Tiger, the genius of the flat tax in the Baltic states, articles praising Silvio Berlusconi and so forth. Certainly at no point during the Bush administration was there a lot of talk in the right-wing press about the evils of household debt, the overwhelming merits of current account surpluses, or any of the rest of it.


Andrew Gelman:

Nothing here about “hardworking” or “virtuous.” In a meritocracy, you can be as hardworking as John Kruk or as virtuous as Kobe Bryant and you’ll still get ahead—if you have the talent and achievement. Throwing in “hardworking” and “virtuous” seems to me to an attempt (unconscious, I expect) to retroactively assign moral standing to the winners in an economic race.


The reason I consider this a poisonous idea is that it distracts us from the actual moral actions of the people involved. I'd consider "all bosses are bad" to be an equally poisonous notion. Painting a social class with either virtue or vice is likely to distort thinking and policy in bad ways.

Economic success is good; we all like living in a world where utility is maximized. But I'd thought that linking material success to moral virtue (consider the finale of the story of Job) had gone out of style. Instead we live in a world where people with significant moral failings (think Steve Jobs) can still contribute to economic success.

Linking these two concepts interferes with assisting the economic losers, as it tends to attach the stigma of blame. This is not to say that hard work doesn't correlate with economic success, but it is important to remember that hard work does not necessarily lead to it.

Just ask a medieval serf!

Saturday, September 10, 2011

Type M Bias

From the pages of Andrew Gelman:

And classical multiple comparisons procedures—which select at an even higher threshold—make the type M problem worse still (even if these corrections solve other problems). This is one of the troubles with using multiple comparisons to attempt to adjust for spurious correlations in neuroscience. Whatever happens to exceed the threshold is almost certainly an overestimate.


I had never heard of Type M bias before I started following Andrew Gelman's blog, but now I think about it a lot when I do epidemiological studies. I have begun to think we need a two-stage model: one study to establish an association, followed by a replication study to estimate the effect size. I do know that the novel associations I find often end up diluted after replication (not that I have that large an N to work with).
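A minimal simulation sketch of why (with assumed numbers): when power is low, the estimates that survive the significance filter systematically exaggerate the true effect, which is exactly the dilution-on-replication pattern.

```python
import numpy as np

# Assumed values: a small true effect measured with equally large noise,
# i.e., a badly underpowered design.
rng = np.random.default_rng(2)
true_effect, se, sims = 0.1, 0.1, 100_000
est = rng.normal(true_effect, se, size=sims)
significant = np.abs(est) > 1.96 * se
print(f"power: {significant.mean():.2f}")
print(f"mean |estimate| when significant: {np.abs(est[significant]).mean():.2f}")
# -> power ~0.17, and the significant estimates average ~0.25 against a true
#    effect of 0.10, so an honest replication will look like "dilution".
```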

The bigger question is whether the replication should be bundled with the original effect estimate or whether it makes more sense for a different group to look at the question in a separate paper. I like the latter more, as it crowd-sources science. But it would be better if the original paper were not so often in a far more prestigious journal than the replication study, since the replication is the one you would prefer to be the default source for effect size estimation (and thus should be the easier and higher-prestige one to find).

Friday, August 12, 2011

Stop by and see what you think

Andrew Gelman has some intriguing news about the possibility of reform in the institutional review board system.

Tuesday, July 19, 2011

Popular Micro-economics

Andrew Gelman writes:

Pop economists (or, at least, pop micro-economists) are often making one of two arguments:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

Argument 1 is associated with "why do they do that?" sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there's some rational reason for what seems like silly or self-destructive behavior.

Argument 2 is associated with "we can do better" claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.


[sorry for the general link; there seemed to be a technical issue linking to the specific article]

I think that this insight is quite compelling and exposes a real issue with popular micro-economics. After all, these two positions mostly contradict each other!

I see this contradiction arising for different reasons than Andrew does. I think that economists are often forced to make strong assumptions in order to deal with the sorts of complex problems they look at. So they presume rationality on the part of all actors when trying to explain behavior (i.e., descriptive work). But when they want to improve matters, they shift and reject this assumption (as it would suggest everything is already optimized). So they make different (strong and unverifiable) assumptions depending on whether they are trying to explain behavior or give guidance on improving outcomes.

What seems more alarming to me is that this pool of economists often doesn't "sanity check" the effect sizes of their analyses. Perhaps it is my background in epidemiology, but if I see a hazard ratio of 5 I am immediately suspicious that it is too good to be true. However, Ray Fisman can see an analysis that suggests firing 80% of teachers and not necessarily wonder whether there is an overly strong assumption in the analysis (like the pool of new potential teachers not having finite limits, or the real sensitivity of the evaluation process).

Friday, June 3, 2011

One more thought

Mark, Andrew Gelman, and I have all weighed in on the article written by David Rubinstein, so I think the high points have been hit.

One thing that I think was interesting, though, was his decision to be defiant about his career. Comments like:

Why do I put “worked” in quotation marks? Because my main task as a university professor was self-cultivation: reading and writing about topics that interested me. Maybe this counts as work. But here I am today—like many of my retired colleagues—doing pretty much what I have done since the day I began graduate school, albeit with less intensity.


This seemed to me to be extremely revealing. One possible explanation is cultural: I grew up in a social democracy, and I would feel shame if I thought I was "milking the system". My father used to explain the low Swedish unemployment rate to me as a cultural phenomenon (in Sweden, the shame of not working is high).

On the other hand, I occasionally see people bragging about how they bilked the system, as if this were a point of pride and not a deep humiliation. I wonder if there is a cultural divide here, as I would find such behavior shameful. Consider this statement:

Committee meetings were tedious but, except for the few good departmental citizens, most of us were able to avoid undue burdens.


Why would he not be ashamed not to have been a good departmental citizen? I am always worried that I am not carrying enough of the administrative burden in my department, and concerned when another faculty member is overloaded.

Or this comment:

To be sure, some of my colleagues were prodigious researchers, devoted teachers, and outstanding departmental, university, and professional citizens. But sociologists like to talk about what they call the “structural” constraints on behavior. While character and professional ethics can withstand the incentives to coast, the privileged position of a tenured professor guarantees that there will be slackers.


So why are character and professional ethics not valued? Why does a confession of a lack of professional ethics sound like an attack on the profession rather than an admission of a personal failing? Mark pointed out the comments on his teaching ability, and I have to admit it sounds like he wasn't the most focused professor ever.

Or perhaps I misunderstand some parts of this situation?

Thursday, June 2, 2011

University Professor Compensation

Andrew Gelman links to college professor David Rubinstein who makes a number of inflammatory comments under a headline that includes "Thank you, Illinois taxpayers, for my cushy life":

After 34 years of teaching sociology at the University of Illinois at Chicago, I [Rubinstein] recently retired at age 64 at 80 percent of my pay for life. . . . But that's not all: There's a generous health insurance plan, a guaranteed 3 percent annual cost of living increase, and a few other perquisites.


In my 34 years, just one professor in the sociology department resigned to take a nonacademic job. For open positions, there were always over 100 applicants, several of them outstanding. The rarity of quits and the abundance of applications is good evidence that the life of the college professor is indeed enviable.


Protests against efforts to reform pay scales, teaching loads, and retirement benefits employ a “solidarity forever, the union makes us strong” rhetoric. What these professors and other government workers do not understand is that they are not demanding a share of the profits from the fat-cat bourgeoisie. They are squeezing taxpayers—for whom the professors purport to advocate—whose lives are in most cases far harsher than their own.


I have a few comments to make about this particular issue. First of all, I notice that the professor in question is happy to brag about his success but is not proposing, for example, to stop accepting his pension. I am not sure why it makes sense to mock the people who support one's financial position. Is it considered good sense in business to mock one's customers?

Second, there are a lot of people in all fields of endeavor who do not earn their salary. What is it about academics that makes people think it might make sense to brag about being a poor performer?

Third, why the focus on the cost of academic pensions to the taxpayer? We subsidize a lot of activity -- for example, we tax hedge fund managers at capital gains rates. That is a very substantial subsidy, yet we do not hear comments about how hard it is for people in the working class to subsidize multi-billion-dollar incomes.

I think Andrew Gelman makes the winning point here:

David Rubinstein appears to be somewhat of an extreme case of the underworked and overpaid professor: he taught at a low-ranked but high-paying institution, he got his Ph.D. at a time where they were giving out tenure-track slots like candy canes at Christmas, he (by his own admission) spends a total of less than one hour per week on class preparation, grading, and advising combined, and he got a contract in an era with generous retirement benefits.


It is unclear to me what action makes sense here, given these issues. Clearly, attacking current new professors does not address the core issue (that David Rubinstein was hired in an exceptional era). It can only hurt people who have had to struggle up under much more difficult circumstances than he did -- which seems identical to his critique of academics in general (that they are privileged and don't care about people who have had to deal with more difficult life circumstances).

I do know that my own experience of university life is completely different. I am in a research position, and I still spend a lot more time on teaching than he does, not to mention leading student research groups. My pension is defined contribution, and my pay has just been cut as part of a state-wide austerity movement (after one year -- the point at which compounding means cuts hurt the most). It's hard work, but I like the job. I do know that when I worked in industry, pay was higher and life was a lot less stressful.

It is a very odd article.

Saturday, May 28, 2011

Consistency

I was looking at Andrew Gelman's blog and saw his discussion of Ray Fisman's piece on compensation. I was especially struck by this statement:

Yet this aversion to pay cuts isn't good for workers or the American economy more broadly. More people end up losing their jobs than if wages were more flexible, and there are serious long-term consequences for the workers who lose their monthly paychecks. The negative impact on a worker's earnings, health, and even the earning prospects of his children lasts decades beyond the pink slip's arrival. Creative solutions—like the furloughs that cut government salaries in California and elsewhere—might help to make lower pay more palatable, by presenting the cut as a temporary measure and by creating at least the illusion of a lower workload. If we can find other ways of overcoming the simmering resentment that naturally accompanies wage cuts, workers themselves will be better for it in the long run.


I wonder how this links with tax increases. Greg Mankiw seemed to be quite displeased by the prospect of tax increases (which are a form of reducing compensation).

If this relation works the same way, then there are two implications:

1) Tax cuts are a terrible way of providing economic stimulus, as they will do little to increase output (3 hours, according to Fisman's experiment) but will be greatly resented when rescinded.

2) High taxes (like low pay) seem to have little effect on output, so if you raise taxes you should raise them by a lot, so that you only feel the pain of resentment once.

I am not sure that this position is correct, but it sure is thought provoking.

Wednesday, April 6, 2011

Unemployment Forecasting

There was a nice discussion, by both Andrew Gelman and Paul Krugman, of the plausibility of the employment figures in Paul Ryan's 2012 budget proposal for the United States. While this blog isn't really a political one, I do think this discussion is a good example of how to critically evaluate models in epidemiology. It is pretty rare that a model is simply and obviously wrong. Instead, you have to look at all of the different elements of the model and see what looks dodgy. After all, the actual headline result is almost always something about which we are uncertain. So we have to look for clues as to what might be going wrong by examining the other outputs of the model (and perhaps some of the modeling assumptions).

If a study shows that the use of statin-class drugs prevents cancer, that is a pretty interesting finding. But the finding gets less interesting if further exploration reveals that statins prevent all forms of disease except cardiovascular disease. The latter would be a clue that something, somewhere, is going wrong.

In the case of the Paul Ryan budget, the unemployment estimate seems lower than is plausible, which might obscure the actual trade-off between taxes and economic growth. I am not an economist (in any way, shape, or form), but I am willing to conjecture that his dynamic scoring algorithm for the influence of tax cuts on unemployment might be an issue. Perhaps the algorithm should account for diminishing returns as unemployment falls (but fails to do so properly). Or maybe the model overstates the magnitude of the underlying relation (or possibly reverses it). Complex models have a lot of moving parts, and there are a lot of places where bias can be introduced. So it is important to be critical (of both our own work and the work of others) when we try to do this type of difficult forecasting.

Sunday, February 6, 2011

Education and Poverty

One of the most difficult things in looking at education is that it is very hard to establish causal directions. Are bad teachers creating poor educational outcomes, or are places with poor educational outcomes so difficult to teach in that the good teachers leave? It's an important question when you consider possible interventions.

This was very clear in these posts by Andrew Gelman and Jonathan Livengood on education and poverty. There is an impressive inverse association between poverty and educational test scores. It's pretty clear that there could be a common cause (i.e., a confounder) or that the causal arrow could point in either direction.

However, the policy decisions are very different depending on the causal relations involved. If poor schooling is a cause of poverty then it makes sense to focus massive efforts on educational reform. But if poor educational outcomes are caused by poverty then this suggests a completely different approach. And, of course, finding a confounder is also a key point.

Now, this assumes that all parties are acting in good faith. If there were an interest in keeping outcomes poor in MS and CA (to pick two low-performing states), then educational reforms that worsened outcomes might make sense. This would be a good strategy if one wanted to keep using "poor educational outcomes" as a talking point, since reforms take decades to really take effect.

Finally, one has to worry that real issues (poor educational outcomes) might be used to implement ideological views (i.e. a dislike of unions) when such reforms might be orthogonal to the issues at hand. This wastes the effort of reform without addressing the underlying issues.

Now, none of this is to accuse any group of bad faith. But it is important to realize that there are multiple viable hypotheses about the causes of poor educational outcomes. The state I am most interested in is California; it seems to have very poor outcomes for a large state with a diverse (and often knowledge-based) economy. If that is a reflection of the resources expended, then maybe we need to consider counter-intuitive approaches (like raising taxes to improve educational funding) if we really want to make a difference.

Sunday, January 30, 2011

Mini-colloquium: Homework

One of the cool things about Andrew Gelman's blog is the quality of discussion you'll see in his comment section. The observations are almost always well-informed, and the tone strikes a nice balance between spirited and civil.

Prof. Gelman's recent post on homework (which responded to this post we did earlier on OE) prompted one of these excellent discussions. Joseph has already built a post around one particularly interesting comment, but all of them are worth reading.

Saturday, January 29, 2011

Educational Lotteries

Hoisted from the comments on Andrew Gelman's web site, Steve Sailer claims:

In my one experience with a charter school lottery, the charter school made it quite difficult to enter the lottery; and to find out if your kid was selected, you had to go there and ask them. And, it appeared, it wasn't a random lottery at all. My son was known to the founders of the school as a star student, so when I showed up and nervously asked if his name had been picked, I was told, Don't worry about it, of course he's in.


This is a very good point. There have been some very good comments on the use of charter school lotteries as a gold standard, and I think there are persuasive reasons to be concerned even if the lotteries are fair. However, it would also be in the self-interest of charter schools to accept a few "star students" outright rather than leave their selection to chance. Insofar as this happens at all, we no longer have randomization (and thus cannot use these lotteries to estimate causal effects, even if the other concerns were not present).
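It takes surprisingly little of this to poison the design. Here is a minimal simulation sketch (all numbers invented): the school adds nothing at all, yet quietly admitting known star students outside the lottery manufactures a healthy-looking "effect".

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
ability = rng.normal(size=n)
star = ability > 1.5                      # students already known to the school
# Admission: stars get in automatically; everyone else faces a fair coin flip.
admitted = star | (rng.random(n) < 0.5)
# Outcome: test score depends on ability alone -- the school adds nothing.
score = ability + rng.normal(size=n)
naive_effect = score[admitted].mean() - score[~admitted].mean()
print(f"apparent charter-school effect: {naive_effect:.2f} (true effect: 0.00)")
# -> roughly +0.26 on the score scale, purely from broken randomization.
```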

So it seems increasingly hard to treat these lotteries as a valid instrumental variable.

Saturday, January 22, 2011

Note to Gelman -- first fill its mouth with salt, then light candles, then decapitate

Andrew Gelman is once again going after the voting-is-irrational zombie (disinterred this time by the Freakonomics team). Gelman shows, using estimates that if anything err on the conservative side, that the possibility of influencing an election, though small, can still easily be associated with a reasonable expected value.

This particular zombie has been shambling through the dark corridors of pop econ books and columns for years now (Gelman himself has been monster hunting since at least 2005), but every time the creature seems truly dead and buried, along comes someone like Landsburg or Levitt, someone who's smart enough and mathematically literate enough to know better, but who just can't resist digging up the grave.

Tuesday, January 11, 2011

Andrew Gelman on the methodological attribution problem

Andrew Gelman has a post in which he brings up the following insight:

This sort of thing comes up a lot, and in my recent discussion of Efron's article, I list it as my second meta-principle of statistics, the "methodological attribution problem," which is that people think that methods that work in one sort of problem will work in others.


I think that this concern is especially important for scientists who are moving between fields. The ideas and techniques in my field have been honed to a fine edge for dealing with the types of biases and design issues that often occur in our problems. I focus a lot on issues like "confounding by indication" and a lot less on other issues that can be very important elsewhere. If I moved to another field, say economics, I might easily focus too much on small points (that really are not an issue in economics research) and yet miss the major issues in that field. This type of translation problem is not inevitable, but it is worth keeping in mind.