Thursday, May 14, 2009
Via Tyler Cowen comes this new article.
So, because health research is being cited, Cheerios are a drug? What, then, is the rationale for regulating this product? Patient safety? Denying patients food because clinical research has not been done strikes me as unwise.
I do think that poor regulation of products can lead to harm but there does seem to be a difference of kind here. Nor do I think that merely citing research should be an issue: there should be more careful research into diet and not less.
Wednesday, May 6, 2009
What should your final model be?
When presenting the best possible model based on the weight of evidence, should you report the results of Bayesian Model Averaging (BMA) or a single selected model built from the variables that had strong posterior probabilities in the BMA analysis?
I think you have to present the posterior probabilities. They actually contain MORE information than the selected model. More importantly, the p-values in a selected model don't seem to give a good sense of model uncertainty. This is what I dislike about all automated model selection approaches: they hide the competing models that were close to the final model in likelihood but just happened not to be selected.
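As a minimal sketch of what those posterior probabilities buy you (simulated data, base R only, with BIC standing in as the usual approximation for the posterior model weights):

    # Approximate BMA by hand: weight candidate models by exp(-BIC/2)
    set.seed(42)
    n  <- 200
    x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)  # x3 is pure noise
    y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)

    models <- list(y ~ x1, y ~ x2, y ~ x1 + x2, y ~ x1 + x2 + x3)
    fits   <- lapply(models, lm)
    bic    <- sapply(fits, BIC)

    # Approximate posterior model probabilities (uniform prior over candidates)
    post <- exp(-0.5 * (bic - min(bic)))
    round(post / sum(post), 3)  # a weight for every competing model, not one winner

The spread of those weights is exactly the model uncertainty that the selected model's p-values sweep under the rug.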
But it looks like almost no substantive papers use BMA. Why is this?
Tuesday, May 5, 2009
Measuring changes
Is there anything trickier than measuring changes? Probably, yes, but measuring change has to be near the top of the list when it comes to the difficulty of translating clear concepts into analysis. We can all describe, using ordinary language, the idea that a person may become more physically active with time.
But how do we operationalize this using the tools available in a standard cohort study? Do we look at raw change? What about change conditional on baseline? The second quantity is going to have issues with measurement error.
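A quick simulation makes the measurement error problem concrete (made-up numbers; only the direction of the bias matters):

    # Measurement error makes change look like it depends on baseline
    set.seed(1)
    n     <- 5000
    true0 <- rnorm(n, 100, 10)        # true baseline level
    true1 <- true0 + rnorm(n, 0, 5)   # true change is independent of baseline
    obs0  <- true0 + rnorm(n, 0, 10)  # noisy baseline measurement
    obs1  <- true1 + rnorm(n, 0, 10)  # noisy follow-up measurement

    coef(lm(I(true1 - true0) ~ true0))["true0"]  # ~0, as designed
    coef(lm(I(obs1 - obs0) ~ obs0))["obs0"]      # strongly negative, purely artifactual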
What about categorizing things? Sadly, this creates the annoying problem of regression to the mean and ignores the bigger question of how to define a meaningful threshold, to say nothing of the information lost by breaking a continuous variable into two categories. This reduction might be defensible if the categories align with clear clinical guidelines and so make results easier to operationalize. But many variables don't have this lucky property.
The more I look at this, the trickier the whole thing becomes.
Monday, May 4, 2009
Mission Creep
Professor in Training has a post on the amazing amount of time that it takes to certify a lab and/or a researcher. I find it very interesting how there has been a slow increase in the amount of overhead to do research. Each step may be both reasonable and justified. But the total effect can be quite counter-productive.
And, I must admit, I am not altogether sure that it is a better world when the key skill for a junior professor is negotiating bureaucracy and/or finding ways around regulation. I know why making these rules apply to everyone is useful (people avoid these things if they have any chance at all -- or at least I do). But maybe we should learn to balance the total impact on research against the protection of lab workers and research subjects?
Friday, May 1, 2009
Sick Days part 2
A nice follow-up to my post yesterday about the public health issues of poor sick leave policy. This is an area where we have a bad tendency to spout what good practice is and then put incentives in place that make following such practices highly problematic at best.
Thursday, April 30, 2009
Sick Days
Do we get enough here in the United States? I wonder what the contribution to public health would look like if we made it unacceptable to show up to work or school ill. In the beginning this approach would increase absenteeism but, in the long run, it might reduce the spread of diseases across the population very effectively.
The issue is more that we have no mechanisms for verifying that people are actually ill (and herding them into an MD's office isn't an ideal solution, as it pools all of the ill people together to infect older and sicker patients there for other health concerns). Add in a cultural dislike of reasonable accommodation and it is not surprising that people comply with the incentives to show up to work or school ill.
Don't you wish that there was a better way?
Wednesday, April 29, 2009
Academic Reform
Any attack on an entrenched institution seems to draw strong feelings as to whether the critique is justified. However, this one seems to be flawed even by the standards of people with serious concerns about the current model for scientific research.
In general, I think questioning a model is good and necessary to ensure strong future research. But there are several things that I think need to be kept in mind.
1) If it is obvious, has universal agreement and yet is not being done then there might be a reason for that. The idea that science and liberal arts degrees should better prepare students for work in the real world is an old idea; that it isn't being done is both because it is hard and because it diverts students from research productivity.
Incentives matter and so does viability.
2) Reforming tenure is an old canard. Let me make one point. At best speed, to become a tenured professor, one needs to spend 4 years as an undergraduate, 6 years as a graduate student and possibly as many as 5 years as a post-doctoral fellow (where salaries are typically well below what could be obtained elsewhere). After this 15 year preparation, professors then have 5 years to win grants and prove themselves worthy of tenure. This is a 20 year preparatory program, much of it at very low wages.
It is true that the reward system is loaded at the back end but reforming tenure to make being a senior professor less rewarding should ideally also consider the poor working conditions, long hours and low pay of the multi-decade training program!
Just a thought.
I really liked MsPhD's concern about what happens in a world of pure "soft money" positions. I am starting to suspect that the downsides of this paradigm (at least as currently implemented) outweigh the upsides.
Tuesday, April 28, 2009
Bayesian Priors
I know that it is difficult to actually create a prior that substantively influences a point estimate (except by design). But I always find it hard to justify a default prior. Is it merely that I have not spent enough time thinking about my data? Or am I missing some sort of trick to this process?
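For concreteness, here is the conjugate-normal arithmetic that I keep running into (hypothetical numbers):

    # Posterior mean for a normal mean with known variance:
    # a precision-weighted average of the prior mean and the sample mean
    post_mean <- function(prior_mu, prior_sd, xbar, sigma, n) {
      w <- (1 / prior_sd^2) / (1 / prior_sd^2 + n / sigma^2)
      w * prior_mu + (1 - w) * xbar
    }

    # A fairly opinionated prior (mean 0, sd 1) against 100 observations
    post_mean(prior_mu = 0, prior_sd = 1, xbar = 2, sigma = 2, n = 100)
    # ~1.92: the prior barely moves a sample mean of 2 once n is moderate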
Aetiology
This epidemiology site has excellent coverage of the current swine flu outbreak. It is nice to see some smart commenting on the progress of this disease, and the Battlestar Galactica quotes are an added bonus.
Monday, April 27, 2009
Swine Flu
Well, it looks like there is a case in Northern California now. Joy!
It is unclear, when the numbers are so low, what the size of the epidemic will be and what the mortality rate will be. The rate of 173 deaths out of 1,995 infections would be alarming if we really knew that 1,995 was the correct denominator. However, there is likely to be considerable under-ascertainment among survivors in a country like Mexico (especially those who did not seek medical care). This factor is likely to be especially important if reports of patients being turned away from medical care turn out to be widely true.
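The denominator arithmetic is easy to sketch (the ascertainment factors below are pure guesses, which is exactly the point):

    # Naive case-fatality rate under varying under-ascertainment of survivors
    deaths    <- 173
    confirmed <- 1995
    true_per_confirmed <- c(1, 2, 5, 10, 20)  # hypothetical ascertainment factors
    round(deaths / (confirmed * true_per_confirmed), 4)
    # 0.0867 0.0434 0.0173 0.0087 0.0043 -- the alarming 8.7% melts away quickly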
On the flip side, with only 40 cases in the US, many of which may be at the beginning of the course of the disease, we have no idea what the mortality rate is likely to be. It is still possible for it to be 0%, at which point we'll look like we over-reacted.
I think that this is a good example of the challenges of problems with lagged information (incubation times) and geometric growth rates. The point at which intervention can be effective seems to also be the point at which there is not enough data to distinguish a serious public health issue from a decidedly mild outbreak.
Thursday, April 23, 2009
OT: Truly bad ideas
Mark has a nice question about a suggestion from the Ohio militia to have 1,000,000 armed people march on Washington as a demonstration.
What could go wrong?
Somehow the mind boggles . . .
Pay it forward
MsPhD brings up an interesting point: what is the duty of people in the field to ensure the success of the next generation? Modern science has reached an interesting place where the talented and engaged scientist is able to continue to contribute for an extremely long period of time. In many cases, the ability to have people with this refined level of judgment around is really important.
But there is also a tension with the newer generation. In biomedicine we have seen less of this tension lately because there was recently a big expansion of funding. But now that we are getting back to a more stable funding paradigm, it is worth asking what the balance should be between mentoring new people and continuing one's own work.
It'd be more obvious if the barriers to entry were lower -- mentoring a new investigator is hard work; can or should we find creative ways to make it more rewarding?
Wednesday, April 22, 2009
You can never go home
Today's post by a post-doc [chall] reminded me of my own immigration experiences. It was a very depressing and demoralizing period, one that I hope never to repeat.
But it has another, unexpected, consequence. It is really hard to go home for work. This idea of working for a while back home has come up, but leaving the US for an extended period of time for employment really appears to place the Green Card at risk. I could just wait it out, become a citizen and then consider such things. But that seems like a bad approach at a number of levels.
So there really is a "burning of the boats" feeling with my time in the US. It's probably a good thing that I like the environment here a lot. :-)
Tuesday, April 21, 2009
Only in Canada
I encountered this thought-provoking article on a science blog:
Gordon, R. & B.J. Poulin (2009). Cost of the NSERC science grant peer review system exceeds the cost of giving every qualified researcher a baseline grant. Accountability in Research: Policies and Quality Assurance 16(1), 1-28.
Basically, the authors point out that, given the number of applicants and the success rate of NSERC grants, we might all be better off just awarding a fixed amount to every qualified applicant. It seems to make sense only in the Canadian context, where high success rates, low indirect costs and a lack of soft-money positions might actually make this approach viable. There are definitely some well-founded concerns about this modest proposal, but the mere fact that it has such a strong argument underpinning it forces us to reflect on the peer review system. Peer review is a massive burden and can lead to a "herd mentality". Some peer review is essential, as you can't evaluate top experts in any other reasonable way (at least in the context of awarding grants).
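A back-of-envelope version of their argument runs something like this (the numbers below are placeholders I made up, not the paper's figures):

    # If review overhead is folded into the awards, what could a baseline grant be?
    applicants   <- 10000    # hypothetical
    success_rate <- 0.7      # high NSERC-style success rate (hypothetical)
    avg_grant    <- 30000    # average award in dollars (hypothetical)
    review_cost  <- 10000    # cost of preparing and reviewing one proposal (hypothetical)

    awarded  <- applicants * success_rate * avg_grant
    overhead <- applicants * review_cost
    (awarded + overhead) / applicants  # baseline grant for everyone: $31,000
    success_rate * avg_grant           # vs. expected value of applying: $21,000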
Is there too much redundancy in the system?
Friday, April 17, 2009
OT: Pension Plans
A recent post has me wondering how aware people are of how these things are linked. At the most fundamental level, a pension plan based on a future government promise is no different from a pension plan that invests in government bonds. It doesn't matter whether the promise comes via a government program or via a government commitment to pay out yields on bonds. Both require the future health of the government and a decision to honor obligations.
The history of bonds has many examples where government policy worked to ensure that bonds (i.e., the basis of the national debt) were not paid. While countries like the United States and Canada are not Spain or Russia, even Britain has had some questionable episodes with national debt.
So why do people seem to assume that "defined contribution plans" based on government debt are safer than "defined benefit plans" like Social Security or the Canada Pension Plan?
It is true that you could invest "defined contributions" into the stock market or corporate bonds instead. I venture to say that recent events have shown non-zero risk with corporate bonds. Stock markets have a bad tendency to have large shifts. So low risk investing is looking a lot like believing in government's ability to meet future debt obligations.
I think that once we accept this fact, the complexity of the situation becomes a lot more apparent.
Thursday, April 16, 2009
How long should a post-doctoral fellowship last?
I think that we have a privileged view of this question in Epidemiology. None of the professors being hired when I started my PhD had done a post-doctoral fellowship beyond a one-year stint to get another university on their CV, and only about 50% of them had done even that.
Now I am entering the second year of a post-doctoral fellowship and you can see the change: it is becoming common to do a multi-year fellowship. Although I still see cases of people being hired into faculty positions directly out of the PhD or, more commonly, trying to be.
So this makes me sympathetic to this post by MsPhd. It is being widely panned elsewhere, but I wonder if the main point is being missed. Post-doctoral training is an ideal phase for "creep" in expectations. It's pretty clear that standards are rising in biomedicine for what it takes to succeed. Some of this is good -- a solid arms race will produce better outcomes. But the dark side of this process is people being kept back for very long periods of time. If this process leads to happy outcomes (or decent outcomes) for all involved then this is perfectly fine.
But what if it doesn't?
What is the ideal length of a fellowship training period? My field's traditional answer of "none" seems too short but more than five years seems too long. Or am I missing something?
Wednesday, April 15, 2009
Tenure Track?
Over in blue lab coats, a comment was made about the downside of only accepting students who look promising for tenure track positions. Namely, this criterion would bias against students who are unlikely to be accepted into tenure track positions.
Now, this bias can take one of two forms. There can be irrational discrimination (such as age or gender-based). I think that we can all agree that this type of discrimination is a bad thing and should be avoided.
But at a more pragmatic level, a scheme where one student in ten succeeds (and those appear to be optimistic odds) at the primary career path of a 5+ year, high-intensity training program seems to need revision as well. If the goal of the academy is to produce the next generation of academic researchers then we need to fess up to the unfair nature of over-production.
And don't get me started on the whole "non-tenure track" movement in medical research. Vile!
Tuesday, April 14, 2009
Why clear methods are important
DrugMonkey put up a really nice post today. I think that the issues DrugMonkey raises are even more important in epidemiology than in bench science. In bench science you have the possibility of replication in a strict sense (and it is, in fact, a governing principle of bench science). In Epidemiology, the population changes between studies and it is difficult to compare between populations. As Heraclitus said, "You can never step into the same river twice"; in Epidemiology you never have the same study population twice.
Perversely, this fact actually makes the methods more important as failure to replicate could also indicate unique features of a population. This makes it critical to be able to separate methodological issues from population issues, insofar as this is possible.
Given how badly medical papers seem to document decisions (part of it being a style issue for the field), there is a lot we could do to improve matters.
Monday, April 13, 2009
Who will be successful?
Posts from Professor in Training and Physio-Prof have gotten me thinking about academic evaluation procedures. The discussion about who got hired and their subsequent success makes me consider one of the difficult issues in academic hiring. You want to hire people who will be successful. But, most of the time, you don't know what success will look like. So you use proxy measures, some of which are blindingly unfair. For example, the institution that a person attended could be due to brilliance or it could be due to location, connections or a number of other factors.
The same thing is true of obtaining funding. This seems to be the major hurdle for success as a junior academic. I seem to have no trouble publishing papers but I have had a series of miserable failures when applying for fellowships. I was never sure why success in one domain translated into abject failure in another. Or maybe I just never understood the CIHR criteria for funding students?
But if the measure of success of a fellowship is productivity then it is odd that I never obtained one, as many of my peers who were easily offered several choices had far less success at producing research. In this sense, I find the academy more difficult to succeed in than my old career as a statistician. Back then, poor prognostic signs could be overcome with hard work, smart ideas and a lot of success. People stopped caring what your alma mater was once you became highly successful.
In academics, failing to get a fellowship is a reason not to promote somebody further. So once one thing goes wrong it is much harder to get back into the pipeline. These days I have half given up on a career path and mostly stick around doing cool research. I like what I do, and that is rare enough that I have kind of stopped caring about the whole "career management" issue.
But it strikes me as a sub-optimal system in a lot of ways . . .
Saturday, April 11, 2009
OT: Knights of the Old Republic
While I know I am late to the table in singing the praises of this game, the storyline is remarkable. It is a very nice implementation of an interactive story and seems to strike a good balance between the level of restriction required to make voice acting viable and freedom to do interesting things. It is a good plot with a rather well foreshadowed plot twist.
But why, for what possible reason, would they insert a "difficult to beat" (for my bad reflexes at least) first person shooter with a long sequence of videos between the last save point and the shooter? I think I have seen the same 5 minutes of video so many times that I want to scream!
Otherwise a fine game!
Thursday, April 9, 2009
Bayesian Model Averaging
I know that it is the best choice for selecting variables for predictive models. I know that it is massively superior to stepwise model selection. Chris Volinsky's page is a great resource for doing these models in R.
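My understanding is that, once the data are in shape, the R side is only a few lines (a sketch using the BMA package associated with Volinsky's page; the data here are hypothetical):

    library(BMA)  # Raftery/Hoeting/Volinsky's Bayesian Model Averaging package

    # Hypothetical predictor matrix x and outcome y
    set.seed(7)
    x <- matrix(rnorm(200 * 5), ncol = 5)
    colnames(x) <- paste0("x", 1:5)
    y <- 2 * x[, 1] + rnorm(200)

    fit <- bicreg(x, y)  # BMA for linear regression via the BIC approximation
    summary(fit)         # posterior inclusion probabilities and the top models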
But I don't speak R and I have been resisting for a while. It's the decade of SAS training -- I am so fast in SAS that it feels like I am walking through water when I switch to R or STATA. I can do it but the experience is unpleasant, to say the least.
On the other hand, it's the right approach for a project that I am looking at right now. So maybe it is time to bite the bullet and use a new software interface?
Tuesday, April 7, 2009
When non-linear relationships attack
In epidemiology it is conventional to assume that relationships are linear, and in a lot of cases this assumption seems pretty reasonable. Occasionally you get a relationship where very large values have biological plausibility (like income or albumin excretion rate) and a base-2 logarithmic transformation is a more logical way to proceed.
So far, so good.
Now look at the rate of change of bone mineral density by age for women as reported in this article from the Canadian Medical Association Journal. Between ages 25 and 40 the rate of change of bone mineral density is positive. Then, between ages 40 and 60, the rate of change is negative. Suddenly, at age 60+, the rate of change becomes positive again. Yes, the first derivative of the relationship between age and bone mineral density is quadratic! That would rather imply that the actual relationship is cubic!!
Without seeing this sort of high quality descriptive data, I would have screamed "overfit" if I saw a cubic age term in a statistical model. As it is, I am having pessimistic thoughts about just how big the sample size needs to be in order to estimate the polynomial association between age and bone mineral density.
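To see what estimating that shape demands, here is a simulation of a cubic age trend with roughly the reported turning points (invented numbers, not the CMAJ data):

    # A cubic age trend whose derivative is positive, negative, then positive again
    set.seed(3)
    n   <- 2000
    age <- runif(n, 25, 85)
    f   <- function(a) 1e-4 * (a^3 / 3 - 50 * a^2 + 2400 * a)  # f' ~ (a - 40)(a - 60)
    bmd <- f(age) + rnorm(n, 0, 0.3)

    fit_lin <- lm(bmd ~ age)
    fit_cub <- lm(bmd ~ poly(age, 3))
    anova(fit_lin, fit_cub)  # decisive with n = 2000; shrink n and the evidence
                             # for the cubic term degrades quickly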
Now imagine that you are trying to remove confounding by age from another association; just how are you going to be sure that you don't have residual confounding?
Wow, just wow!
Monday, April 6, 2009
Stepwise Regression
Why is it so hard to get consensus on a good variable selection technique for exploratory models? The classic one, implemented everywhere and understood by everyone, is really sub-optimal (stepwise regression -- I am looking at you). It seems to just love to include noise parameters, which make any and all models difficult to explain.
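The noise-hoovering is easy to demonstrate with base R's step() on data that are nothing but noise:

    # Stepwise selection on pure noise: it will "find" predictors anyway
    set.seed(2009)
    n   <- 100
    dat <- data.frame(matrix(rnorm(n * 20), ncol = 20))
    dat$y <- rnorm(n)  # outcome unrelated to all 20 candidate predictors

    full <- lm(y ~ ., data = dat)
    sel  <- step(full, direction = "backward", trace = 0)
    names(coef(sel))   # typically several pure-noise "effects" survive selection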
Sure, you can build a model based on theory but what do you do when you want to know what factors might be associated with an outcome? And, of course, exploratory datasets tend to be the smallest possible data sets!!
Grrrr . . . .
Friday, April 3, 2009
Reporting results
Writing an effective scientific paper is an art form. It requires presenting a complex idea in about 3,000 words. However, there are always features of the study that are hard to describe cleanly in a small number of words. So what does one do?
If you go into detail then you inevitably confuse the reader. On the other hand, you want to produce the most complete report possible. Online supplements can help but only so far.
This happened to me in this paper:
Delaney JAC, Daskalopoulou SS, Suissa S. Traditional versus marginal structural models to estimate the effectiveness of β-blocker use on mortality after myocardial infarction. Pharmacoepidemiol Drug Saf 2009; 18(1):1-6.
There were two effects going on in the same paper. One, we were accounting for time-dependent confounding. Two, we were switching from a conditional to a marginal estimate. Both of these changes contribute to the differences in estimates.
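The conditional-versus-marginal piece can at least be isolated with a toy example (simulated data; this shows the non-collapsibility of the odds ratio, with treatment randomized so there is no confounding at all):

    # Conditional vs. marginal odds ratios for the same treatment effect
    set.seed(18)
    n  <- 1e5
    z  <- rnorm(n)           # strong risk factor, independent of treatment
    rx <- rbinom(n, 1, 0.5)  # randomized treatment
    y  <- rbinom(n, 1, plogis(-1 + rx + 2 * z))

    exp(coef(glm(y ~ rx + z, family = binomial))["rx"])  # conditional OR, ~2.7
    exp(coef(glm(y ~ rx,     family = binomial))["rx"])  # marginal OR, noticeably smaller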
But does separating them increase or decrease confusion? If the goal is to find an analytic approach that is equivalent to a randomized controlled trial then the reasons why are less important.
My question is whether I'd aid understanding by spelling out these subtleties or enhance confusion by bringing in tangential points. To this day I question which approach was correct!
Thursday, April 2, 2009
Budget Cuts
One of the hardest things to handle in the modern environment is that the academy is not designed for dynamism. It is based on a system where hard work and apprenticeship eventually lead to a slow succession of improvements. However, when circumstances can change rapidly this puts the whole system into flux. Positions and outcomes that people have labored for decades to achieve can now be at risk.
I think that this situation is less true for the very senior people and more for those "too far in to back out" but who have not "really made it yet". Basically, PhD students, post-doctoral fellows and assistant professors are the class at risk, and they are often placed into situations that are Kafkaesque.
Now it is hard to argue for who deserves resources; "deserve" being such a complicated word. But it is pretty clear to me that the budget cuts to higher education (at least here in the state of Washington) look to be pretty deep. I suspect few companies would cut so deeply unless they were in crisis.
This leads me to ask: is higher education failing so badly that it should count as being in crisis?
But, either way, there really is not a lot of fat, per se, left to be cut. Schools seem to be in (generally) slightly poor repair. Equipment seems old and a lot of work-arounds exist. Teaching resources are hardly in surplus. It could be that we are trying to do too much with too little. But that suggests a rethinking of priorities -- not brutal, massive reductions in budgets.
Or at least that is the way that I see it.
Wednesday, April 1, 2009
What to do in hard times
In this time of funding cuts and stress, what is the right reaction? I think that Professor in Training nails it: keep working! Stressing about what might happen is useful only insofar as it informs contingency planning. But, as a post-doctoral fellow, I have limited options that don't involve trying to develop a record of accomplishment.
One day I might get around to telling the long story of my attempts to move forward into the academy. But, for the moment, let me say that I like this attitude. Way better than assuming that disaster is coming and seeing that as an excuse for there to be at least some excitement in life. I really hope that I never get that cynical.
I like research but I admit that I also really liked working in the private sector. It had its downsides, but I liked the dynamism and the idea that accomplishment was to be prized. I think that is why I have liked my current unit so much -- they have the same sort of culture!
Tuesday, March 31, 2009
OT: Dentists
I know that sometimes you need to inflict pain to make things better in the long run. And I am a happier person with the idea that I will have some sort of teeth in middle age.
But it is not pleasant to have three hours of solid dental drilling!!
:-(
Monday, March 30, 2009
More on Tenure
There is an interesting discussion on DrugMonkey about Tenure.
I think that the original comments are making a rather important point. High value research with a short term pay-off is ideally suited to the private sector. It has every advantage in pursuing these goals and lacks the "overhead" of an academic institution. I know that discussions of comparative advantage can be complicated, but this is one situation where the private sector really is better poised to answer the questions.
The advantage of the academy is in long term planning and results. This environment gives stability and the ability to pursue dead ends. Even if the academy was better at some short term goals, it's still better to have it focus on the goals where the structure is advantaged relative to the private sector.
One argument against tenure involves the complicated issue of mandatory retirement. I think that this issue is not unique to academia and it is an independent issue from tenure. It is also unclear, in a world where pensions are so unstable, what the options are. Perhaps we need to reconsider ideas like seniority based salaries? I am not sure but I see this as a more general concern and only distantly related to the issue of tenure itself.
But the real issue seems to be whether or not the post-tenure world is good for the academy. I would argue that the answer is no. Perhaps I made a very bad decision to go back into academics at this time given the current pressures, but I can't help but think that the levels of deprivation seen by junior academics are dysfunctional. Young Female Scientist talks about the sorts of deprivation that junior academics undergo; after a decade of such a lowered standard of living, why is it seen as "lazy or dysfunctional" to want job security?
So I think that there are many good arguments for tenure and I think many of the "anti-tenure" arguments are red herrings.
Saturday, March 28, 2009
Academic Positions
Thoreau (whom I just don't read enough) has a great post on the issues with academic positions in bio-medicine. The recent doubling of the NIH budget made it possible for the number of academics to dramatically increase. This increase led to people having very unrealistic expectations about academic jobs. I was in Physics in the 1990s when there was a contraction in the field -- I think it is fair to say that the future of Bio-Medicine is about to have some of the same tragic outcomes.
The worst part is that I don't even have a decent alternate plan.
Cross Sectional Drug Effects
Probably the most frustrating thing in the life of a pharmacoepidemiologist is explaining why cross-sectional drug effects are impossible to estimate. People exposed to a drug at baseline have an outcome that is a composite of:
1) True Drug Effect
2) Underlying Disease Condition (indication for the drug)
It is impossible to separate these effects. So you get strange results when you analyze these data sets: for example, anti-hypertensive medications often appear to increase blood pressure when you look at cross-sectional data.
This phenomenon makes it impossible to do any causal inference from a cross sectional drug study if the outcome is even remotely related to the indication. Much grief would be saved if we kept this feature of such studies in mind.
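A toy version of the anti-hypertensive example (hypothetical effect sizes):

    # Confounding by indication: a BP-lowering drug "raises" BP cross-sectionally
    set.seed(24)
    n        <- 1e5
    severity <- rnorm(n, 0, 10)                       # underlying hypertension
    treated  <- rbinom(n, 1, plogis(0.3 * severity))  # sicker patients get the drug
    bp       <- 140 + severity - 8 * treated + rnorm(n, 0, 5)  # true effect: -8 mmHg

    coef(lm(bp ~ treated))["treated"]  # positive: the indication swamps the drug effect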
Thursday, March 26, 2009
Too many names
Has anybody else noticed that the simple conceptual idea of comparing the rate/incidence/prevalence of a disease in the exposed and the unexposed goes by too many names?
I can think of papers that use: odds ratio, relative risk, cumulative incidence ratio, prevalence ratio, rate ratio, hazard ratio . . . All of which have subtle differences.
But none of which are used consistently.
I suspect that we could do a lot of good just to clean this terminology up!
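Even the two most common ones diverge as soon as the outcome is common; a worked 2x2 example:

    # Risk ratio vs. odds ratio from the same 2x2 table
    #             diseased  healthy
    # exposed           40       60
    # unexposed         20       80
    rr <- (40 / 100) / (20 / 100)        # risk ratio = 2.00
    or <- (40 / 60) / (20 / 80)          # odds ratio = 2.67
    c(risk_ratio = rr, odds_ratio = or)  # same data, different-sounding "effect"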
Wednesday, March 25, 2009
Medication discontinuation
A lot of my work has been in the area of Pharmacoepidemiology. So it was with great interest that I read the commentary in the March 15th issue of the American Journal of Epidemiology by Dr. Noel Weiss. Dr. Weiss is a brilliant epidemiologist, so it is no surprise that his commentary clearly laid out the conceptual and practical problems associated with studies of medication discontinuation.
The main problem is that people do not start (or stop) medication at random. They take medications to treat some underlying condition (thus leading to confounding by indication) and they stop for a number of reasons (including the treatment is completed). We know, for sure, that some drugs have withdrawal issues (consider morphine or SSRIs).
I've actually looked at this question with statin drug withdrawal and still worry about how successful we were at controlling for confounding factors (and, believe me, we did an enormous amount of analysis to see how robust the effect was).
But what is hard, in all of these studies, is separating the reason for stopping the drug from the outcome. If a participant stops an SSRI and has an increased risk of suicide, is that a marker of:
1) The drug was not working to begin with
2) There were severe withdrawal issues
Separating these two factors is important! After all, if there is a period of increased danger then alternative monitoring for serious health events becomes an option.
But Dr. Weiss gives an excellent summary of all of the things that can go wrong in such analyses and why we need to be careful in interpreting them. So if you work in drug research at all, this article is definitely worth a look.
Tuesday, March 24, 2009
Mistakes
We all make mistakes. Perfection, much as we would like it to be a standard part of the human condition, is not something that any one of us can claim. But mistakes happen when doing science, and admitting to one is among the hardest things to do. In all of the cases that have happened to me, the mistake surfaced during the sanity-checking phase of an analysis.
But the problem is that finding issues with the analysis (i.e. did we define a quantity correctly) unnerves collaborators. Rather than being happy that we are having a good "back and forth discussion" and being pleased that the issues have come up early, it seems to shake confidence.
I think that this tendency to react badly to these issues is actually harmful. Mostly because it makes analysts less likely to speak up if they suspect that something is wrong. And, should I ever become an independent investigator, I really want people to speak up if there is an issue with data or analysis.
So I am not sure how to structure these interactions to provide the best possible outcome!
Monday, March 23, 2009
Service work
The more epidemiology that I do, the less I like being an analyst. It is a very annoying job. People expect you to do data management, analysis and a fair chunk of the writing (methods section!) that is often the least interesting to craft. But there is a common tendency to do two things:
1) Make all decisions without talking to the analyst
2) Act like it is your fault if the data isn't ideal
I used to volunteer for these jobs because I thought that it would be fun and interesting to work with many different people. Now I must admit I am coming to loathe them!
Friday, March 20, 2009
Academic positions
Thoreau has an interesting article about the perverse effects that can happen when trying to encourage diversity. What I am beginning to realize is that the academic career path is a high risk/moderate reward path. Some students will end up with decent jobs that give freedom and opportunities to think; others will end up in quite different areas than they intended. But the standard academic program has a lot of the characteristics of a pyramid scheme, in that the goal placed before PhD students -- a tenured faculty position -- is increasingly unrealistic.
In epidemiology we are seeing an increasing reliance on soft money positions. I am not sure, in the long run, whether this reliance is a good or a bad feature. But it sure makes the implicit bargain of "years of sacrifice" being rewarded less obvious.
But the real issue with faculty seems to be the training versus placement issues. Most of the other concerns are perverse effects of that particular "elephant in the room".
Thursday, March 19, 2009
Industry versus Academy
Mark sent me this article and I thought that it made an excellent point. Research is not right or wrong depending on the source. In this sense, a classic ad hominem fallacy is at work: we criticize the source of the research and not the content of the research.
I think that this brings up two issues, both important.
1) Transparency: A lot of the issues with epidemiological research lie in the fact that methods are not standard or transparent. There is an art to doing an epidemiological study and no good mechanism in place to determine whether the results were cherry-picked or the most honest representation of the data possible.
2) Incentives: Let us be honest, everyone in the research game is motivated (at least in part) by financial incentives. I see a lot of researchers who are also motivated by a genuine desire to help solve serious medical problems. But I do not think that being in academia is a good test for this motivation as people go into industry for a lot of reasons.
But it is, without a doubt, in the best interest of a researcher to find something "interesting". One paper can make or break the transition from post-doctoral fellow to faculty. One paper can make or break a tenure application. This is not to say that fraud is widespread in the academy -- I rather expect that it is extremely rare.
But we are kidding ourselves if we do not admit that everyone with a career in research (whether in industry, government, or the academy) has at least some incentive to find exciting and interesting results. I think we mostly resist this temptation and focus on giving the most honest appraisal of the data that is possible; but we should never forget that all research can be driven by the rewards of actual discovery.
Wednesday, March 18, 2009
Tenure
It is no surprise that, like any junior academic, I have seriously mixed feelings about the slow erosion of tenure. I must admit that I agree with the author of Confessions of a Community College Dean that the current two-track arrangement is a symptom of a system in decline.
What is not addressed there, but which is really relevant to my area, is how the loss of tenure changes my job. As a researcher (without tenure and with no prospect of tenure), my career is now dependent on getting funding or working on projects that get funding. In a real sense this is the death of the "freedom to explore" that originally lured me into the academy. Now, it is quite true that there was never a utopian time when professors and post-docs could diligently pursue their whims wherever these might lead. Certainly modern teaching loads are completely different (at least in medical schools), which really does change the character of the job.
Still, it means that my career will now be spent responding to requests for funding in a broad range of government initiatives. A long funding period is five years, and the more appealing types of grants can last as little as two. This is actually less stable than the contract system that the University of Kentucky is implementing!
It is not that great research can't be done under these conditions. But it does really change the definition of stability. I never thought, when I left the banking industry in 2002, that I would end up with less employment stability. More curiously, I seem to have about the same freedom to innovate (I can innovate insofar as it advances a pre-specified set of goals).
It's certainly food for thought.
Tuesday, March 17, 2009
Pre-specified Analysis
One thing that I always find challenging is how to handle changes in the analytic plan. If you send the first set of results to the writing group for discussion and they come back with "wouldn't it make more sense if . . ." then what do you do?
In one sense, this sort of constructive feedback can improve our understanding of an association and improve the paper. On the other hand, it rather does make the p-values less clear. If you pick the association with the lowest p-value, are you optimizing how best to present an association, or are you picking a result that is optimized on the distribution of noise in the data?
It is pretty clear to me that with a pre-specified test of an association you should stick to the analysis plan. But what if you are exploring? Is there a rule for exploratory analysis?
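To make the worry concrete, here is a minimal simulation sketch (pure invention, not tied to any real study): when there is no signal at all, reporting the smallest p-value among ten candidate analyses yields a "significant" result about 40% of the time.

```python
# A minimal sketch, all data simulated under the null: cherry-picking the
# smallest p-value among several candidate analyses inflates the
# false-positive rate far beyond the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_obs, n_candidates = 2000, 100, 10
hits = 0
for _ in range(n_sims):
    y = rng.normal(size=n_obs)                  # outcome with no real signal
    x = rng.normal(size=(n_obs, n_candidates))  # ten candidate exposures
    p_values = [stats.pearsonr(x[:, j], y)[1] for j in range(n_candidates)]
    hits += min(p_values) < 0.05
print(f"Best-of-ten false-positive rate: {hits / n_sims:.2f}")  # ~0.40, not 0.05
```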
OT: Firefly
I was visiting a coffee shop in Seattle that was decorated with a lot of Firefly themed pictures.
So today's post is entirely about good science fiction and the thing that everyone misses. The trick to good science fiction is to start with characters and writing that would work in a standard movie without any science fiction elements at all. Then the science fiction elements can enhance the story and add to the sense of wonder and possibility.
Star Wars could have been a story of a squire becoming a knight. Star Trek could have been a sailing ship in the age of discovery. Both are enhanced by science fiction elements.
But the series that may have had the best characters was Firefly. The newer Battlestar Galactica is trying to compete but the basic story of Firefly was interesting, rich and filled with characters we liked surprisingly quickly.
It really is a shame that we'll never get to see it end.
Friday, March 13, 2009
Irregular Observations
One phenomenon that definitely annoys me is dealing with irregular observations. These occur in contexts where data are passively collected based on when people get medical tests. For example, blood pressure is recorded when you visit a doctor, and this information can be used to assess trends in the population.
Here is the problem: people who have no readings often come from two very distinct groups. One is composed of very healthy people who simply have no need of medical services. The second is comprised of poor compliers who should seek medical care but don't. Obviously, the trajectory of these two groups is very different. And, equally obviously, it's hard to argue that these effects will cancel out in a real population.
Inference can still be done, but it is hard to rule out subtle issues of bias.
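A minimal simulation sketch (all numbers invented) of why the two unmeasured groups need not cancel out:

```python
# A minimal sketch with invented numbers: the people who never show up for a
# reading come from two very different groups, and their omissions need not
# cancel, so the observed clinic sample can misstate the population mean.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["healthy", "average", "noncompliant"], size=n, p=[0.2, 0.6, 0.2])
bp = np.where(group == "healthy", rng.normal(112, 8, n),
     np.where(group == "average", rng.normal(125, 10, n),
              rng.normal(145, 12, n)))          # true systolic blood pressures
observed = group == "average"                   # only this group gets measured
print(f"Population mean BP: {bp.mean():.1f}")            # ~126.4
print(f"Observed mean BP:   {bp[observed].mean():.1f}")  # ~125.0, biased low
```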
Thursday, March 12, 2009
Missing Data
Is there any issue that is more persistent and more difficult to solve than missing data?
It takes a perfectly good study and layers assumptions on it. There is a clear divide in how to handle it. One camp argues "why would you not want to use real data?" and rejects the assumptions of imputation. Of course, this complete-case approach makes its own set of strong assumptions, which are often unlikely to be met.
So you'd think that doing the right thing and modeling the missing data is the way to go? Well, it's an improvement, but it is pretty rare that the assumptions of the missing-data technique are met (missing at random is just not accurate in real data).
So what do you do? Most of the time I recommend modeling (inverse probability weighting or multiple imputation) but I must confess that the lack of a solution that is actually good is rather distressing!
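As a minimal sketch of the trade-off (simulated data, with the missingness mechanism chosen by hand): the complete-case mean is badly biased when missingness depends on an observed covariate, while even a single regression imputation using that covariate largely recovers the truth. A real analysis would use proper multiple imputation, with several imputed datasets and added noise.

```python
# A minimal sketch with simulated data: complete-case analysis versus a
# simple regression imputation when missingness depends on an observed x.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)        # true mean of y is 0
# Higher x makes y more likely to be missing (missing at random given x)
missing = rng.random(n) < 1 / (1 + np.exp(-2 * x))

obs = ~missing
print(f"Complete-case mean of y: {y[obs].mean():+.2f}")  # well below 0

# Single regression imputation: fit y ~ x on the observed, predict the rest
slope, intercept = np.polyfit(x[obs], y[obs], 1)
y_filled = y.copy()
y_filled[missing] = intercept + slope * x[missing]
print(f"Imputed-data mean of y:  {y_filled.mean():+.2f}")  # close to 0
```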
Wednesday, March 11, 2009
A few thoughts on reviews
One of the most frustrating things that we have to face as researchers is reviews. We all want people to recognize the hard work that went into developing papers and grants. None of us got to the point of sending material out for review except by putting in many years and a startling amount of hard work.
So it is very annoying when strange reviews come back. But I have learned that there are a few basic rules:
Should the paper be rejected:
1) Decide if the criticisms are substantive or stylistic. If substantive, then you need to either redevelop the paper or retire it. This is never a pleasant discovery but, in the long run, you'll be glad that a sharp reviewer caught an issue. In my experience, true substantive critiques are rare.
2) If the criticism is stylistic then don't put time into it. Likely the next set of reviewers will have different preferences. Resubmit rapidly.
3) If the criticism seems to apply to another paper entirely, then seriously consider rewriting the confusing sections for clarity. You are the subject matter expert -- it is not reasonable to expect that reviewers and readers will follow all of the nuances.
In the same vein, as a reviewer, asking for massive redevelopment for purely stylistic reasons is often a poor choice. Ask whether the paper is methodologically valid (no point in letting mistakes into the literature) and relevant. These are the real questions that need to be considered.
Peer Review is a frustrating process but it can really improve work if you take advantage of it.
Tuesday, March 10, 2009
Peer Review
There is an interesting post over at Drug Monkey on the issue of bias in peer review. I think that there are two issues that really strike me as important in peer review and it is easy to confuse them. One is the idea that individual reviewers will have preferences for the types of research that they like to see done. This issue is difficult, if not impossible, to solve.
Two, and more annoying, is the issue of competence and coherence in review. I cannot count the number of reviews that I have gotten that had questionable elements. I remember one journal claiming that they could not publish a paper that had an "unverifiable assumption" in it. The assumption in question, no unmeasured confounders, is a pretty standard assumption for observational research. Even clinical trials have this issue, with loss to follow-up not necessarily being at random.
But the reviewer is protected from strong complaints of "what were you thinking?". Now, I too have certainly done peer reviews that could have been better. I think we all can think of examples of this. So I am not claiming to be "special" and I am sure that I have been cursed by an author more than once for not "getting it".
But I think that these concerns are what gives anonymous peer review its bad name.
Monday, March 9, 2009
NIH Challenge Grants
I am beginning to think that the NIH challenge grants are a cunningly disguised trap. The NIH is giving out $200 million for grants that can be up to $1 million apiece and do not require preliminary data. This set-up could generate a lot of applications.
I think that it might make more sense to put that effort into a very solid R01 grant proposal and try to win longer term funding under less pressure of time and with more opportunity to generate productivity.
But, of course, the siren call of "no pilot data" is certainly sounding in my ears too!
Sunday, March 8, 2009
A reply to Andrew Gelman
A reply to Andrew Gelman's latest post where he links to an old post on propensity scores:
My understanding of the issue is that there was also a prevalent user problem (creating selection bias), at least partially due to time-varying risk. While this could have been found and modeled, I am unsure how propensity scores give any advantage over a thoughtfully constructed regression model, unless the study you are thinking of had much more power to estimate predictors of exposure than of the outcome because outcomes were few (but I don't believe that this was the case with the Nurses' Health Study).
I'm not saying that better statistical models shouldn't be used, but I worry about overstating the benefits of propensity score analysis. It's an extremely good technique, no question about it, and I've published on one of its variations. But I want to be very sure that we don't miss issues of study design and bias in the process.
Issues of self-selection seriously limit all observational epidemiology. The issue is serious enough that I often wonder if we should not use observational studies to estimate medication benefits (at all). It's just too misleading.
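As a minimal sketch of the first point (simulated data, hypothetical effect sizes): when the confounder is measured, an ordinary regression adjustment and an inverse-probability-weighted propensity score analysis land on essentially the same estimate. Neither, of course, can repair bias from confounders that were never measured.

```python
# A minimal sketch with simulated data: regression adjustment and a
# propensity-score (IPW) estimate agree when the confounder is measured.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20_000
c = rng.normal(size=n)                               # measured confounder
treated = rng.random(n) < 1 / (1 + np.exp(-0.8 * c)) # sicker people treated more
y = 1.0 * treated + 1.5 * c + rng.normal(size=n)     # true treatment effect = 1.0

print(f"Naive difference:    {y[treated].mean() - y[~treated].mean():.2f}")  # ~2.0

# Outcome regression: y ~ treated + c
X = sm.add_constant(np.column_stack([treated.astype(float), c]))
print(f"Regression adjusted: {sm.OLS(y, X).fit().params[1]:.2f}")            # ~1.0

# Propensity score by logistic regression, then inverse-probability weighting
ps = sm.Logit(treated.astype(float), sm.add_constant(c)).fit(disp=0).predict()
w = np.where(treated, 1 / ps, 1 / (1 - ps))
ate = np.average(y[treated], weights=w[treated]) - \
      np.average(y[~treated], weights=w[~treated])
print(f"IPW (propensity):    {ate:.2f}")                                     # ~1.0
```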
This minor point of disagreement aside, I freely admit that Andrew Gelman is one of my heroes in the statistical community. I love some of his posts. His work on statistical significance is incredibly thought provoking, very helpful in clarifying thought and a must read for any epidemiologist.
Saturday, March 7, 2009
Self Selection
In a lot of ways, I think that different forms of self selection are the biggest threat to study validity in observational epidemiology. We see it in loss to follow-up when participants select out of clinical trials. We see it in important exposures like diet, exercise and alcohol use where the exposure is likely correlated with many other health seeking behaviors. Heck, we know that being adherent to placebo therapy is associated with good outcomes.
So the trick seems to be isolating the effect of a single exposure. It is the process of thinking up ways to do this isolation that allows epidemiologists to really earn their keep.
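The placebo-adherence fact is worth a minimal sketch (invented effect sizes throughout): let an unmeasured "health-seeking" trait drive both adherence and outcomes, and the placebo adherers look better even though the placebo does nothing at all.

```python
# A minimal sketch with simulated data: self-selection alone creates an
# outcome gap between placebo adherers and non-adherers.
import numpy as np

rng = np.random.default_rng(11)
n = 30_000
health_seeking = rng.normal(size=n)  # unmeasured trait
adherent = rng.random(n) < 1 / (1 + np.exp(-health_seeking))
outcome = 0.0 * adherent + 0.5 * health_seeking + rng.normal(size=n)  # placebo effect: zero

print(f"Mean outcome, adherers:     {outcome[adherent].mean():+.2f}")   # ~ +0.2
print(f"Mean outcome, non-adherers: {outcome[~adherent].mean():+.2f}")  # ~ -0.2
```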
Friday, March 6, 2009
Why do observational epidemiology?
Observational epidemiology studies are often the source of highly misleading results. And yet, despite this problem, they are essential to the better understanding of human health. There are many exposures, some of them quite critical, that cannot be studied in any other way.
My personal favorite is adverse drug effects. Clinical trials are often underpowered to detect adverse events; in order to show these effects trials often need to be combined. Given the logistics involved, it is helpful to show an association between the drug in question and adverse events in real populations.
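A back-of-the-envelope sketch (the event rates are assumed for illustration, not taken from any real trial) shows the scale of the problem: even a tripling of a rare adverse event demands thousands of patients per arm.

```python
# A minimal sketch: approximate sample size per arm needed to detect a
# difference in two proportions, via the standard normal-approximation formula.
from scipy.stats import norm

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return (z_a + z_b) ** 2 * variance / (p_control - p_treated) ** 2

# A tripling of a rare adverse event (0.1% -> 0.3%) still needs ~7,800 per arm
print(round(n_per_arm(0.001, 0.003)))
```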
I hope to discuss the many interesting challenges and ideas in this field of research as I try to muddle towards some sort of resolution in a confusing sea of fuzzy data.