Tuesday, March 31, 2009

OT: Dentists

I know that sometimes you need to inflict pain to make things better in the long run. And I am happier knowing that I will have some sort of teeth in middle age.

But it is not pleasant to have three hours of solid dental drilling!!

:-(

Monday, March 30, 2009

More on Tenure

There is an interesting discussion on DrugMonkey about tenure.

I think that the original comments are making a rather important point. High-value research with a short-term pay-off is ideally suited to the private sector. Companies have every advantage in pursuing these goals and lack the "overhead" of an academic institution. I know that discussions of comparative advantage can be complicated, but this is a situation where the private sector really is better poised to answer the questions.

The advantage of the academy is in long-term planning and results. This environment gives stability and the ability to pursue dead ends. Even if the academy were better at some short-term goals, it would still be better to have it focus on the goals where its structure is advantaged relative to the private sector.

One argument against tenure involves the complicated issue of mandatory retirement. I think that this issue is not unique to academia and is independent of tenure. It is also unclear, in a world where pensions are so unstable, what the options are. Perhaps we need to reconsider ideas like seniority-based salaries? I am not sure, but I see this as a more general concern that is only distantly related to the issue of tenure itself.

But the real issue seems to be whether or not the post-tenure world is good for the academy. I would argue that the answer is no. Perhaps I made a very bad decision to go back into academia at this time given the current pressures, but I can't help but think that the levels of deprivation seen by junior academics are dysfunctional. Young Female Scientist talks about the sorts of deprivation that junior academics undergo; after a decade of such a lowered standard of living, why is it seen as "lazy or dysfunctional" to want job security?

So I think that there are many good arguments for tenure and I think many of the "anti-tenure" arguments are red herrings.

Saturday, March 28, 2009

Academic Positions

Thoreau (whom I just don't read enough) has a great post on the issues with academic positions in bio-medicine. The recent doubling of the NIH budget made it possible for the number of academics to increase dramatically. This increase led to people having very unrealistic expectations about academic jobs. I was in physics in the 1990s when there was a contraction in the field -- I think it is fair to say that bio-medicine is about to see some of the same tragic outcomes.

The worst part is that I don't even have a decent alternate plan.

Cross Sectional Drug Effects

Probably the most frustrating thing in the life of a pharmacoepidemiologist is explaining why cross-sectional drug effects are impossible to estimate. People exposed to a drug at baseline have an outcome that is a composite of:

1) True Drug Effect

2) Underlying Disease Condition (indication for the drug)

It is impossible to separate these effects within a single cross-section. So you get strange results when you analyze these data sets: for example, anti-hypertensive medications often appear to increase blood pressure in cross-sectional data.
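To make this concrete, here is a minimal simulation (all numbers invented for illustration) of confounding by indication. The drug truly lowers blood pressure by 10 mmHg, but because high blood pressure is the indication for treatment, the cross-sectional comparison points the other way:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Underlying (untreated) systolic blood pressure.
    true_bp = rng.normal(130, 15, n)

    # Confounding by indication: people with high blood pressure are
    # far more likely to be prescribed an anti-hypertensive.
    treated = rng.random(n) < np.where(true_bp > 140, 0.80, 0.05)

    # True drug effect: treatment lowers blood pressure by 10 mmHg.
    observed_bp = true_bp - 10 * treated

    # Cross-sectional contrast of users versus non-users.
    print(observed_bp[treated].mean() - observed_bp[~treated].mean())
    # Comes out around +10 mmHg: the drug appears to RAISE blood
    # pressure even though its true effect is -10 mmHg.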

This phenomenon makes it impossible to do any causal inference from a cross-sectional drug study if the outcome is even remotely related to the indication. Much grief would be saved if we kept this feature of such studies in mind.

Thursday, March 26, 2009

Too many names

Has anybody else noticed that the basic idea of comparing the rate/incidence/prevalence of a disease in the exposed and the unexposed goes by too many names?

I can think of papers that use the odds ratio, relative risk, cumulative incidence ratio, prevalence ratio, rate ratio, hazard ratio... all of which have subtle differences.

But none of which are used consistently.
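As a toy illustration (hypothetical counts) of why the distinctions matter, here are the risk ratio and the odds ratio computed from the very same 2x2 table:

    # Hypothetical 2x2 table: 30/100 exposed cases, 10/100 unexposed cases.
    risk_exposed = 30 / 100
    risk_unexposed = 10 / 100

    risk_ratio = risk_exposed / risk_unexposed
    odds_ratio = ((risk_exposed / (1 - risk_exposed))
                  / (risk_unexposed / (1 - risk_unexposed)))

    print(risk_ratio)  # 3.0
    print(odds_ratio)  # ~3.86

The two only converge when the outcome is rare, yet papers routinely report one while discussing the other as if they were interchangeable.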

I suspect that we could do a lot of good just to clean this terminology up!

Wednesday, March 25, 2009

Medication discontinuation

A lot of my work has been in the area of Pharmacoepidemiology. So it was with great interest that I read the commentary in the March 15th issue of the American Journal of Epidemiology by Dr. Noel Weiss. Dr. Weiss is a brilliant epidemiologist and so it is no surprise that his commentary clearly laid out the conceptual and practical problems associated with these studies.

The main problem is that people do not start (or stop) medication at random. They take medications to treat some underlying condition (thus leading to confounding by indication), and they stop for a number of reasons (including that the treatment is completed). We know, for sure, that some drugs have withdrawal issues (consider morphine or SSRIs).

I've actually looked at this question with statin drug withdrawal and still worry about how successful we were at controlling for confounding factors (and, believe me, we did an enormous amount of analysis to see how robust the effect was).

But what is hard, in all of these studies, is separating the reason for stopping the drug from the outcome. If a participant stops an SSRI and has an increased risk of suicide, is that a marker that:

1) The drug was not working to begin with

2) There were severe withdrawal issues

Separating these two factors is important! After all, if there is a period of increased danger, then additional monitoring for serious health events becomes an option.
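A quick sketch (with entirely made-up rates) shows how tangled the two explanations are. Here, stoppers are a mixture of treatment failures and people who simply completed therapy, and there is no withdrawal effect at all; the stoppers still look far riskier than continuing users:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    # Hypothetical reason for stopping: 40% stop because the drug failed.
    failed = rng.random(n) < 0.40

    # Post-discontinuation risk: failures carry a high underlying risk.
    # Note that NO withdrawal effect is simulated here.
    risk = np.where(failed, 0.08, 0.01)
    event = rng.random(n) < risk

    # Continuing users: moderate underlying risk while on treatment.
    continuing_risk = 0.02

    print(event.mean() / continuing_risk)
    # ~1.9: stopping appears to nearly double the risk even though
    # explanation (2), withdrawal, contributes nothing here.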

But Dr. Weiss gives an excellent summary of all of the things that can go wrong in such analyses and why we need to be careful in interpreting them. So if you work in drug research at all, this article is definitely worth a look.

Tuesday, March 24, 2009

Mistakes

We all make mistakes. Perfection, much as we would like it to be a standard part of the human condition, is not something that any one of us can claim. But, when doing science, mistakes do happen, and admitting to one is among the hardest things to do. In all of the cases that have happened to me, the mistake surfaced during the sanity-checking phase of an analysis.

But the problem is that finding issues with the analysis (e.g., did we define a quantity correctly?) unnerves collaborators. Rather than being happy that we are having a good "back and forth" discussion, and being pleased that the issues have come up early, it seems to shake confidence.

I think that this tendency to react badly to these issues is actually harmful. Mostly because it makes analysts less likely to speak up if they suspect that something is wrong. And, should I ever become an independent investigator, I really want people to speak up if there is an issue with data or analysis.

So I am not sure how to structure these interactions to provide the best possible outcome!

Monday, March 23, 2009

Service work

The more epidemiology that I do, the less I like being an analyst. It is a very annoying job. People expect you to do the data management, the analysis, and a fair chunk of the writing (the methods section!) that is often the least interesting to craft. But there is a common tendency to do two things:

1) Make all decisions without talking to the analyst

2) Act like it is your fault if the data isn't ideal

I used to volunteer for these jobs because I thought that it would be fun and interesting to work with many different people. Now I must admit I am coming to loathe them!

Friday, March 20, 2009

Academic positions

Thoreau has an interesting article about the perverse effects that can happen when trying to encourage diversity. What I am beginning to realize is that the academic career path is a high-risk/moderate-reward path. Some students will end up with decent jobs that give freedom and opportunities to think; others will end up in quite different areas than they intended. But the standard academic program has a lot of the characteristics of a pyramid scheme, in that the goal placed before PhD students -- a tenured faculty position -- is increasingly unrealistic.

In epidemiology we are seeing an increasing reliance on soft money positions. I am not sure, in the long run, whether this reliance is a good or a bad feature. But it sure makes the implicit bargain of "years of sacrifice" being rewarded less obvious.

But the real issue with faculty positions seems to be training versus placement. Most of the other concerns are perverse effects of that particular "elephant in the room".

Thursday, March 19, 2009

Industry versus Academy

Mark sent me this article and I thought that it made an excellent point. Research is not right or wrong depending on the source. In this sense, criticizing the source of the research rather than its content is a classic use of the ad hominem fallacy.

I think that this brings up two issues, both important.

1) Transparency: A lot of the issues with epidemiological research lie in the fact that methods are not standard or transparent. There is an art to doing an epidemiological study, and there is no good mechanism in place to determine whether the results were cherry-picked or are the most honest representation of the data possible.

2) Incentives: Let us be honest: everyone in the research game is motivated (at least in part) by financial incentives. I see a lot of researchers who are also motivated by a genuine desire to help solve serious medical problems. But I do not think that being in academia is a good test for this motivation, as people go into industry for a lot of reasons.

But it is, without a doubt, in the best interest of a researcher to find something "interesting". One paper can make or break the transition from post-doctoral fellow to faculty. One paper can make or break a tenure application. This is not to say that fraud is widespread in the academy -- I rather expect that it is extremely rare.

But we are kidding ourselves if we do not admit that everyone with a career in research (whether in industry, government, or the academy) has at least some incentive to find exciting and interesting results. I think we mostly resist this temptation and focus on giving the most honest appraisal of the data that is possible; but we should never forget that all research can be driven by the rewards of actual discovery.

Wednesday, March 18, 2009

Tenure

It is no surprise that, like any junior academic, I have seriously mixed feelings about the slow erosion of tenure. I must admit that I agree with the author of Confessions of a Community College Dean that the current two-track academic system is a symptom of a system in decline.

What is not addressed there, but which is really relevant to my area, is how the loss of tenure changes my job. As a researcher (without tenure and with no prospect of tenure), my career is now dependent on getting funding or working on projects that get funding. In a real sense this is the death of the "freedom to explore" that originally lured me into the academy. Now, it is quite true that there was never a utopian time when professors and post-docs could diligently pursue their whims wherever these might lead. Certainly modern teaching loads are completely different (at least in medical schools), which really does change the character of the job.

Still, it means that my career will now be spent responding to requests for funding in a broad range of government initiatives. Long funding periods are five years, and it is quite possible for the more appealing types of grants to last only two. This is actually less stable than the contract system that the University of Kentucky is implementing!

It is not that great research can't be done under these conditions. But it does really change the definition of stability. I never thought, when I left the banking industry in 2002, that I would end up with less employment stability. More curious still, I seem to have about the same freedom to innovate (I can innovate insofar as it advances a pre-specified set of goals).

It's certainly food for thought.

OT: What D&D class am I?



Tuesday, March 17, 2009

Pre-specified Analysis

One thing that I always find challenging is how to handle changes in the analytic plan. If you send the first set of results to the writing group to be discussed and they come back with "wouldn't it make more sense if . . ." then what do you do?

In one sense, this sort of constructive feedback can improve our understanding of an association and improve the paper. On the other hand, it rather does make the p-values less clear. If you pick the association with the lowest p-value, are you optimizing how best to present an association, or are you picking a result that is optimized on the distribution of noise in the data?
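A quick null simulation makes the danger concrete. The outcome below is pure noise, but if five candidate analyses are tried (modeled here, for simplicity, as five independent exposure definitions) and the smallest p-value is reported, the false-positive rate lands far above the nominal 5%:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_sims, n, k = 2000, 200, 5  # k candidate analyses per data set

    false_positives = 0
    for _ in range(n_sims):
        y = rng.normal(size=n)  # outcome with NO true association
        pvals = [stats.pearsonr(rng.normal(size=n), y)[1] for _ in range(k)]
        false_positives += min(pvals) < 0.05

    print(false_positives / n_sims)
    # ~0.23 rather than 0.05 (approximately 1 - 0.95**5)

Real analysis variants are correlated rather than independent, so the inflation is milder in practice, but it does not go away.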

It is pretty clear to me that with a pre-specified test of an association you should stick to the analysis plan. But what if you are exploring? Is there a rule for exploratory analysis?

OT: Firefly

I was visiting a coffee shop in Seattle that was decorated with a lot of Firefly themed pictures.

So today's post is entirely about good science fiction and the thing that everyone misses. The trick to good science fiction is to start with characters and writing that would work in a standard movie without any science fiction elements at all. Then the science fiction elements can enhance the story and add to the sense of wonder and possibility.

Star Wars could have been a story of a squire becoming a knight. Star Trek could have been a sailing ship in the age of discovery. Both are enhanced by science fiction elements.

But the series that may have had the best characters was Firefly. The newer Battlestar Galactica is trying to compete but the basic story of Firefly was interesting, rich and filled with characters we liked surprisingly quickly.

It really is a shame that we'll never get to see it end.

Friday, March 13, 2009

Irregular Observations

One phenomenon that definitely annoys me is dealing with irregular observations. This occurs in contexts where data are passively collected based on when people get medical tests. For example, blood pressure is collected when you visit a medical doctor, and this information can be used to assess trends in the population.

Here is the problem: people who have no readings often come from two very distinct groups. One is composed of very healthy people who simply have no need of medical services. The second is composed of poor compliers who should seek medical care but don't. Obviously, the trajectory of these two groups is very different. And, equally obviously, it's hard to argue that these effects will cancel out in a real population.
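Here is a toy sketch (invented numbers) of the visit-driven sampling problem. The probability of having a reading rises with blood pressure, except that poor compliers never generate readings at all, and the observed mean drifts well away from the truth:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    true_bp = rng.normal(125, 15, n)

    # Care-seeking: sicker people are more likely to have a reading,
    # but poor compliers (15%) never visit regardless of need.
    complier = rng.random(n) < 0.85
    p_reading = np.clip((true_bp - 100) / 80, 0.02, 0.95) * complier

    has_reading = rng.random(n) < p_reading

    print(true_bp.mean())               # ~125, the true population mean
    print(true_bp[has_reading].mean())  # several mmHg higher: biased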

Inference can still be done but it makes it hard to rule out subtle issues of bias.

Thursday, March 12, 2009

Missing Data

Is there any issue that is more persistent and more difficult to solve than missing data?

It takes a perfectly good study and layers assumptions on it. There is a clear divide in how to handle it. One option is to argue "why would you not want to use real data?" and reject the assumptions of imputation. Of course, this approach makes its own set of strong assumptions that are often unlikely to be met.

So you'd think that doing the right thing and modeling the missing data is the way to go? Well, it's an improvement, but it is pretty rare that the assumptions of the missing-data technique are met (missing at random is just not accurate in real data).

So what do you do? Most of the time I recommend modeling (inverse probability weighting or multiple imputation) but I must confess that the lack of a solution that is actually good is rather distressing!
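As a small illustration of what modeling buys you, here is a sketch (simulated data) where values are missing at random: the complete-case mean is clearly biased, while even a crude single regression imputation recovers the truth. A real analysis would use proper multiple imputation, with between-imputation variance, rather than this deterministic fill-in:

    import numpy as np

    rng = np.random.default_rng(11)
    n = 50_000

    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)

    # MAR mechanism: x is more often missing when y is large.
    missing = rng.random(n) < np.where(y > 1, 0.6, 0.1)
    obs = ~missing

    # Complete-case mean of x is biased (the true mean is 0).
    print(x[obs].mean())  # ~ -0.2

    # Impute missing x from observed y, fit on the complete cases.
    slope = np.cov(y[obs], x[obs])[0, 1] / y[obs].var()
    intercept = x[obs].mean() - slope * y[obs].mean()
    x_imputed = np.where(missing, intercept + slope * y, x)
    print(x_imputed.mean())  # ~0, close to the truth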

Wednesday, March 11, 2009

A few thoughts on reviews

One of the most frustrating things that we have to face as researchers is reviews. We all want people to recognize the hard work that went into developing papers and grants. None of us got to the point of sending material out for review except by putting in many years and a startling amount of hard work.

So it is very annoying when strange reviews come back. But I have learned that there are a few basic rules:

Should the paper be rejected:

1) Decide if the criticisms are substantive or stylistic. If substantive, then you need to either redevelop the paper or retire it. This is never a pleasant discovery but, in the long run, you'll be glad that a sharp reviewer caught an issue. In my experience, true substantive critiques are rare.

2) If the criticism is stylistic then don't put time into it. Likely the next set of reviewers will have different preferences. Resubmit rapidly.

3) If the criticism seems to apply to another paper entirely, then seriously consider rewriting the confusing sections for clarity. You are the subject matter expert -- it is not reasonable to expect that reviewers and readers will follow all of the nuances.

In the same vein, as a reviewer, asking for massive redevelopment for purely stylistic reasons is often a poor choice. Ask whether the paper is methodologically valid (no point in letting mistakes into the literature) and relevant. These are the real questions that need to be considered.

Peer Review is a frustrating process but it can really improve work if you take advantage of it.

Tuesday, March 10, 2009

Peer Review

There is an interesting post over at Drug Monkey on the issue of bias in peer review. I think that there are two issues that really strike me as important in peer review and it is easy to confuse them. One is the idea that individual reviewers will have preferences for the types of research that they like to see done. This issue is difficult, if not impossible, to solve.

Two, and more annoying, is the issue of competence and coherence in review. I cannot count the number of reviews that I have gotten that had questionable elements. I remember one journal claiming that they could not publish a paper that had an "unverifiable assumption" in it. The assumption in question, no unmeasured confounders, is a pretty standard assumption for all research. Even clinical trials have this issue, with loss to follow-up not necessarily being at random.

But the reviewer is protected from strong complaints of "what were you thinking?". Now, I too have certainly done peer reviews that could have been better. I think we all can think of examples of this. So I am not claiming to be "special" and I am sure that I have been cursed by an author more than once for not "getting it".

But I think that these concerns are what gives anonymous peer review its bad name.

Monday, March 9, 2009

NIH Challenge Grants

I am beginning to think that the NIH challenge grants are a cunningly disguised trap. The NIH is giving out $200 million for grants that can be up to $1 million apiece and do not require preliminary data. This set-up could generate a lot of applications.

I think that it might make more sense to put that effort into a very solid R01 proposal and try to win longer-term funding under less time pressure and with more opportunity to be productive.

But, of course, the siren call of "no pilot data" is certainly sounding in my ears too!

Sunday, March 8, 2009

A reply to Andrew Gelman

A reply to Andrew Gelman's latest post where he links to an old post on propensity scores:

My understanding of the issue is that there was also a prevalent user problem (creating selection bias), at least partially due to time-varying risk. While this could have been found and modeled, I am unsure how propensity scores give any advantage over a thoughtfully constructed regression model -- unless the study you are thinking of had a lot more power to estimate predictors of exposure than of the outcome because there were very few outcomes (but I don't believe that this was the case with the Nurses' Health Study).
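To illustrate the estimator point with a simulation of my own (not the study under discussion): when the confounder is actually measured and both models are reasonable, a plain regression and a propensity-score method (inverse-probability weighting here) recover essentially the same effect. The choice of estimator cannot, by itself, fix a design problem like prevalent users:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    conf = rng.normal(size=n)          # a single measured confounder
    p_treat = 1 / (1 + np.exp(-conf))  # true propensity score
    treated = rng.random(n) < p_treat
    y = 1.0 * treated + 2.0 * conf + rng.normal(size=n)  # true effect = 1.0

    # (a) Outcome regression: OLS of y on treatment and the confounder.
    X = np.column_stack([np.ones(n), treated, conf])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(beta[1])  # ~1.0

    # (b) Inverse-probability weighting by the (true) propensity score.
    ate = (np.mean(y * treated / p_treat)
           - np.mean(y * (1 - treated) / (1 - p_treat)))
    print(ate)  # also ~1.0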

I'm not saying that better statistical models shouldn't be used, but I worry about overstating the benefits of propensity score analysis. It's an extremely good technique, no question about it, and I've published on one of its variations. But I want to be very sure that we don't miss issues of study design and bias in the process.

Issues of self-selection seriously limit all observational epidemiology. The issue is serious enough that I often wonder if we should not use observational studies to estimate medication benefits (at all). It's just too misleading.

This minor point of disagreement aside, I freely admit that Andrew Gelman is one of my heroes in the statistical community. I love some of his posts. His work on statistical significance is incredibly thought-provoking, very helpful in clarifying thought, and a must-read for any epidemiologist.

Saturday, March 7, 2009

Self Selection

In a lot of ways, I think that different forms of self selection are the biggest threat to study validity in observational epidemiology. We see it in loss to follow-up when participants select out of clinical trials. We see it in important exposures like diet, exercise and alcohol use where the exposure is likely correlated with many other health seeking behaviors. Heck, we know that being adherent to placebo therapy is associated with good outcomes.

So the trick seems to be isolating the effect of a single exposure. It is the process of thinking up ways to do this isolation that allows epidemiologists to really earn their keep.

Friday, March 6, 2009

Why do observational epidemiology?

Observational epidemiology studies are often the source of highly misleading results. And yet, despite this problem, they are essential to the better understanding of human health. There are many exposures, some of them quite critical, that cannot be studied in any other way.

My personal favorite is adverse drug effects. Clinical trials are often underpowered to detect adverse events; in order to show these effects, trials often need to be combined. Given the logistics involved, it is helpful to be able to show an association between the drug in question and adverse events in real populations.
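A rough power calculation (illustrative numbers, normal approximation) shows the problem: a trial with 2,000 patients per arm has excellent power for a common efficacy endpoint but is nearly useless for detecting a doubling of a rare adverse event:

    import numpy as np
    from scipy import stats

    def power_two_proportions(p0, p1, n_per_arm, alpha=0.05):
        # Approximate power of a two-sided two-sample test of proportions.
        p_bar = (p0 + p1) / 2
        se_null = np.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
        se_alt = np.sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
        z_crit = stats.norm.ppf(1 - alpha / 2)
        return stats.norm.sf((z_crit * se_null - abs(p1 - p0)) / se_alt)

    # Common efficacy endpoint: 20% versus 15% event risk.
    print(power_two_proportions(0.20, 0.15, 2000))    # ~0.99

    # Rare adverse event: 1 per 1,000 doubled to 2 per 1,000.
    print(power_two_proportions(0.001, 0.002, 2000))  # ~0.13

The normal approximation is crude with only a handful of expected events, but the order of magnitude is the point: detecting the rare harm needs data on far larger populations than any single trial provides.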

I hope to discuss the many interesting challenges and ideas in this field of research as I try to muddle towards some sort of resolution in a confusing sea of fuzzy data.