Tuesday, March 31, 2009
I know that sometimes you need to inflict pain to make things better in the long run. And I am happier with the idea that I will have some sort of teeth in middle age.
But it is not pleasant to sit through three hours of solid dental drilling!
:-(
Monday, March 30, 2009
More on Tenure
There is an interesting discussion on DrugMonkey about tenure.
I think that the original comments are making a rather important point. High-value research with a short-term pay-off is ideally suited to the private sector. Firms have every advantage in pursuing these goals and lack the "overhead" of an academic institution. I know that discussions of comparative advantage can be complicated, but this is one situation where the private sector really is better poised to solve the questions.
The advantage of the academy is in long-term planning and results. This environment gives stability and the ability to pursue dead ends. Even if the academy were better at some short-term goals, it would still be better to have it focus on the goals where its structure is advantaged relative to the private sector.
One argument against tenure involves the complicated issue of mandatory retirement. I think that this issue is not unique to academia and is independent of tenure. It is also unclear, in a world where pensions are so unstable, what the options are. Perhaps we need to reconsider ideas like seniority-based salaries? I am not sure, but I see this as a more general concern, only distantly related to the issue of tenure itself.
But the real issue seems to be whether or not the post-tenure world is good for the academy. I would argue that the answer is no. Perhaps I made a very bad decision going back into academia at this time, given the current pressures, but I can't help thinking that the levels of deprivation experienced by junior academics are dysfunctional. Young Female Scientist talks about the sorts of deprivation that junior academics undergo; after a decade of such a lowered standard of living, why is it seen as "lazy or dysfunctional" to want job security?
So I think that there are many good arguments for tenure and I think many of the "anti-tenure" arguments are red herrings.
Saturday, March 28, 2009
Academic Positions
Thoreau (whom I just don't read enough) has a great post on the issues with academic positions in biomedicine. The recent doubling of the NIH budget made it possible for the number of academics to increase dramatically. This increase led to people having very unrealistic expectations about academic jobs. I was in physics in the 1990s when there was a contraction in the field, and I think it is fair to say that biomedicine is about to see some of the same tragic outcomes.
The worst part is that I don't even have a decent alternate plan.
Cross-Sectional Drug Effects
Probably the most frustrating thing in the life of a pharmacoepidemiologist is explaining why cross-sectional drug effects are impossible to estimate. People exposed to a drug at baseline have an outcome that is a composite of:
1) True Drug Effect
2) Underlying Disease Condition (indication for the drug)
It is impossible to separate these effects, so you get strange results when you analyze these data sets: for example, anti-hypertensive medications often appear to increase blood pressure in cross-sectional data.
This phenomenon makes it impossible to do any causal inference from a cross-sectional drug study if the outcome is even remotely related to the indication. Much grief would be saved if we kept this feature of such studies in mind.
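A minimal simulated sketch of the problem (hypothetical numbers, not from any real data) shows how confounding by indication plays out: even when the drug truly lowers blood pressure, the treated group looks worse in a cross-sectional snapshot because treatment follows the underlying condition.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Underlying condition: hypertensives have higher "untreated" blood pressure.
hypertensive = rng.random(n) < 0.3
untreated_bp = np.where(hypertensive, rng.normal(160, 10, n), rng.normal(120, 10, n))

# Treatment is driven by the indication: mostly hypertensives get the drug.
on_drug = np.where(hypertensive, rng.random(n) < 0.8, rng.random(n) < 0.02)

# True drug effect: lowers blood pressure by 15 mmHg.
observed_bp = untreated_bp - 15 * on_drug

print("mean BP, on drug: %.1f" % observed_bp[on_drug].mean())
print("mean BP, no drug: %.1f" % observed_bp[~on_drug].mean())
# The treated group still has the higher mean BP in cross-section,
# even though the drug's causal effect is to lower it.
```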
Thursday, March 26, 2009
Too many names
Has anybody else noticed that the conceptual idea of the difference in rates/incidence/prevalence of a disease between the exposed and the unexposed goes by too many names?
I can think of papers that use the odds ratio, relative risk, cumulative incidence ratio, prevalence ratio, rate ratio, hazard ratio . . . all of which have subtly different meanings.
But none of them is used consistently.
I suspect that we could do a lot of good just to clean this terminology up!
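As a purely hypothetical illustration of how far apart two of these measures can sit on the same data, here is a 2x2 table where the risk ratio and the odds ratio disagree simply because the outcome is common:
```python
# Hypothetical 2x2 table: rows = exposed/unexposed, columns = cases/non-cases.
a, b = 40, 60   # exposed:   40 cases, 60 non-cases
c, d = 20, 80   # unexposed: 20 cases, 80 non-cases

risk_exposed = a / (a + b)      # 0.40
risk_unexposed = c / (c + d)    # 0.20

risk_ratio = risk_exposed / risk_unexposed   # 2.00
odds_ratio = (a / b) / (c / d)               # 2.67

print(f"risk ratio: {risk_ratio:.2f}, odds ratio: {odds_ratio:.2f}")
# Same table, same exposure-outcome relationship, but the two "ratios"
# differ by a third because the outcome is common (20-40%).
```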
Wednesday, March 25, 2009
Medication discontinuation
A lot of my work has been in the area of pharmacoepidemiology, so it was with great interest that I read the commentary on medication discontinuation studies in the March 15th issue of the American Journal of Epidemiology by Dr. Noel Weiss. Dr. Weiss is a brilliant epidemiologist, and so it is no surprise that his commentary clearly laid out the conceptual and practical problems associated with these studies.
The main problem is that people do not start (or stop) medication at random. They take medications to treat some underlying condition (thus leading to confounding by indication), and they stop for a number of reasons (including that the treatment is completed). We know, for sure, that some drugs have withdrawal issues (consider morphine or SSRIs).
I've actually looked at this question with statin drug withdrawal and still worry about how successful we were at controlling for confounding factors (and, believe me, we did an enormous amount of analysis to see how robust the effect was).
But what is hard, in all of these studies, is separating the reason for stopping the drug from the outcome. If a participant stops an SSRI and has an increased risk of suicide, is that a marker of:
1) The drug was not working to begin with
2) There were severe withdrawal issues
Separating these two factors is important! After all, if there is a period of increased danger, then alternative monitoring for serious health events becomes an option.
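A small simulated sketch (invented parameters, chosen so the two scenarios come out similar) shows why the distinction is so hard to make from observational data alone: a pure non-response mechanism and a genuine withdrawal effect can both leave discontinuers looking like a high-risk group.
```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def discontinuer_risk_ratio(withdrawal_effect, nonresponse_link):
    """Event risk ratio (stopped vs. continued) under two different mechanisms."""
    severe = rng.random(n) < 0.2                      # underlying severity
    if nonresponse_link:
        # Severe (non-responding) patients are the ones who tend to stop.
        p_stop = np.where(severe, 0.50, 0.10)
    else:
        p_stop = np.full(n, 0.18)                     # stopping unrelated to severity
    stopped = rng.random(n) < p_stop
    p_event = 0.02 + 0.08 * severe                    # risk driven by severity
    if withdrawal_effect:
        p_event = p_event + 0.04 * stopped            # true withdrawal effect
    event = rng.random(n) < p_event
    return event[stopped].mean() / event[~stopped].mean()

print("risk ratio, pure withdrawal effect:           %.2f" % discontinuer_risk_ratio(True, False))
print("risk ratio, pure confounding by non-response: %.2f" % discontinuer_risk_ratio(False, True))
# Both mechanisms make discontinuers look like roughly a twofold-risk group,
# so the observed association alone cannot tell them apart.
```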
But Dr. Weiss gives an excellent summary of all of the things that can go wrong in such analyses and why we need to be careful in interpreting them. So if you work in drug research at all, this article is definitely worth a look.
Tuesday, March 24, 2009
Mistakes
We all make mistakes. Perfection, much as we would like it to be a standard part of the human condition, is not something that any one of us can claim. But when doing science, mistakes happen, and admitting to one is among the hardest things to do. In all of the cases that have happened to me, the mistake has turned up during the sanity-checking phase of an analysis.
But the problem is that finding issues with the analysis (e.g., did we define a quantity correctly?) unnerves collaborators. Rather than being happy that we are having a good "back and forth" discussion and being pleased that the issues have come up early, it seems to shake confidence.
I think that this tendency to react badly to these issues is actually harmful. Mostly because it makes analysts less likely to speak up if they suspect that something is wrong. And, should I ever become an independent investigator, I really want people to speak up if there is an issue with data or analysis.
So I am not sure how to structure these interactions to provide the best possible outcome!
Monday, March 23, 2009
Service work
The more epidemiology I do, the less I like being an analyst. It is a very annoying job. People expect you to do the data management, the analysis, and a fair chunk of the writing (the methods section!), which is often the least interesting to craft. But there is a common tendency for collaborators to do two things:
1) Make all decisions without talking to the analyst
2) Act like it is your fault if the data isn't ideal
I used to volunteer for these jobs because I thought that it would be fun and interesting to work with many different people. Now I must admit I am coming to loathe them!
Friday, March 20, 2009
Academic positions
Thoreau has an interesting article about the perverse effects that can happen when trying to encourage diversity. What I am beginning to realize is that the academic career path is a high-risk/moderate-reward path. Some students will end up with decent jobs that give freedom and opportunities to think; others will end up in quite different areas than they intended. But the standard academic program has a lot of the characteristics of a pyramid scheme, in that the goal placed before PhD students (a tenured faculty position) is increasingly unrealistic.
In epidemiology we are seeing an increasing reliance on soft money positions. I am not sure, in the long run, whether this reliance is a good or a bad feature. But it sure makes the implicit bargain of "years of sacrifice" being rewarded less obvious.
But the real issue with faculty seems to be the training versus placement issues. Most of the other concerns are perverse effects of that particular "elephant in the room".
Thursday, March 19, 2009
Industry versus Academy
Mark sent me this article and I thought that it made an excellent point. Research is not right or wrong depending on the source. In this sense, criticizing the source of the research rather than its content is a classic ad hominem fallacy.
I think that this brings up two issues, both important.
1) Transparency: A lot of the issues with epidemiological research lie in the fact that methods are not standard or transparent. There is an art to doing an epidemiological study, and no good mechanism is in place to determine whether the results were cherry-picked or the most honest representation of the data possible.
2) Incentives: Let us be honest, everyone in the research game is motivated (at least in part) by financial incentives. I see a lot of researchers who are also motivated by a genuine desire to help solve serious medical problems. But I do not think that being in academia is a good test for this motivation as people go into industry for a lot of reasons.
But it is, without a doubt, in the best interest of a researcher to find something "interesting". One paper can make or break the transition from post-doctoral fellow to faculty. One paper can make or break a tenure application. This is not to say that fraud is widespread in the academy -- I rather expect that it is extremely rare.
But we are kidding ourselves if we do not admit that everyone with a career in research (whether in industry, government, or the academy) has at least some incentive to find exciting and interesting results. I think we mostly resist this temptation and focus on giving the most honest appraisal of the data that is possible; but we should never forget that all research can be driven by the rewards that come with discovery.
Wednesday, March 18, 2009
Tenure
It is no surprise that, like any junior academic, I have seriously mixed feelings about the slow erosion of tenure. I must admit that I agree with the author of Confessions of a Community College Dean that the current two-track academic structure is a symptom of a system in decline.
What is not addressed there, but which is really relevant to my area, is how the loss of tenure changes my job. As a researcher (without tenure and with no prospect of tenure), my career is now dependent on getting funding or working on projects that get funding. In a real sense this is the death of the "freedom to explore" that originally lured me into the academy. Now, it is quite true that there was never a utopian time when professors and post-docs could diligently pursue their whims wherever these might lead. Certainly modern teaching loads are completely different (at least in medical schools), which really does change the character of the job.
Still, it means that my career will now be spent responding to requests for funding within a broad range of government initiatives. Long funding periods run five years, and the more appealing types of grants can last as little as two. This is actually less stable than the contract system that the University of Kentucky is implementing!
It is not that great research can't be done under these conditions. But it does really change the definition of stability. I never thought, when I left the banking industry in 2002, that I would end up with less employment stability. More curiously, I seem to have about the same freedom to innovate (I can innovate insofar as it advances a pre-specified set of goals).
It's certainly food for thought.
Tuesday, March 17, 2009
Pre-specified Analysis
One thing that I always find challenging is how to handle changes in the analytic plan. If you send the first set of results to the writing group to be discussed, and they come back with "wouldn't it make more sense if . . .", then what do you do?
In one sense, this sort of constructive feedback can improve our understanding of an association and improve the paper. On the other hand, it rather does make the p-values less clear. If you pick the association with the lowest p-value, are you optimizing how best to present an association, or are you picking a result that is optimized on the distribution of noise in the data?
It is pretty clear to me that with a pre-specified test of an association you should stick to the analysis plan. But what if you are exploring? Is there a rule for exploratory analysis?
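To illustrate the worry with a toy simulation (made-up settings, and the candidate analyses are simplified to independent looks at pure noise): if you take several shots at the data and keep the analysis with the smallest p-value, the chance of declaring something "significant" is far above the nominal 5%.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n_per_group, n_candidate_analyses = 2000, 50, 5

false_positives = 0
for _ in range(n_sims):
    p_values = []
    for _ in range(n_candidate_analyses):
        # Two groups drawn from the same distribution: no true association.
        x = rng.normal(size=n_per_group)
        y = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(x, y).pvalue)
    if min(p_values) < 0.05:          # keep the "best-looking" analysis
        false_positives += 1

print("nominal alpha: 0.05, observed rate: %.3f" % (false_positives / n_sims))
# With five independent looks, roughly 1 - 0.95**5, about 23%, of purely null
# datasets produce at least one "significant" association.
```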
OT: Firefly
I was visiting a coffee shop in Seattle that was decorated with a lot of Firefly-themed pictures.
So today's post is entirely about good science fiction and the thing that everyone misses. The trick to good science fiction is to start with characters and writing that would work in a standard movie without any science fiction elements at all. Then the science fiction elements can enhance the story and add to the sense of wonder and possibility.
Star Wars could have been a story of a squire becoming a knight. Star Trek could have been a sailing ship in the age of discovery. Both are enhanced by science fiction elements.
But the series that may have had the best characters was Firefly. The newer Battlestar Galactica is trying to compete but the basic story of Firefly was interesting, rich and filled with characters we liked surprisingly quickly.
It really is a shame that we'll never get to see it end.
Friday, March 13, 2009
Irregular Observations
One phenomenon that definitely annoys me is dealing with irregular observations. This occurs in contexts where data are passively collected based on when people get medical tests. For example, blood pressure is collected when you visit a medical doctor, and this information can be used to assess trends in the population.
Here is the problem: people who have no readings often come from two very distinct groups. One is composed of very healthy people who simply have no need of medical services. The second is made up of poor compliers who should seek medical care but don't. Obviously, the trajectory of these two groups is very different. And, equally obviously, it's hard to argue that these effects will cancel out in a real population.
Inference can still be done, but it is hard to rule out subtle issues of bias.
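A toy simulation (invented numbers) of why the effects do not cancel: if the probability of having a blood pressure reading depends on underlying health, the passively collected readings give a biased picture of the population.
```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# True blood pressure in the whole population.
true_bp = rng.normal(130, 15, n)

# Healthy people rarely visit; sicker people visit more often -- but a subset
# of poor compliers with high BP hardly ever shows up at all.
poor_complier = (true_bp > 150) & (rng.random(n) < 0.5)
p_visit = np.clip((true_bp - 110) / 80, 0.05, 0.9)
p_visit[poor_complier] = 0.02
observed = rng.random(n) < p_visit

print("true population mean BP: %.1f" % true_bp.mean())
print("mean BP among observed:  %.1f" % true_bp[observed].mean())
# The passively collected readings over-represent the moderately sick and miss
# both the very healthy and the non-compliant sick, so the observed mean is a
# biased estimate of the population mean rather than a neutral snapshot.
```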