This is Joseph.
One of the most challenging things in population research is the need to trust the data and analysis done by other groups. Unlike chemistry, we cannot simply replicate experiments without huge expense. Furthermore, the population is getting less and less responsive to surveys. In a very real sense, endlessly replicating strong and clean results would partially displace other research questions. After all, people have a limited tolerance for surveys. This goes double for high-burden approaches, such as door-to-door interviews and interventions (which require trained and paid field agents to conduct the survey with a high degree of professionalism, often for limited wages).
This need to trust, where possible, is what makes stories like this one painful. Full respect to the field for detecting the problem, and I am glad that academia was self-correcting. But these actions have pretty strong consequences.
I also think there is a very important difference between a technical error, a misunderstanding of the data, and completely making data up. The first two are the cases that give every analyst nightmares. But the last seems to have no excuse at all -- how could somebody not know that they were faking data?
That said, it's not like medicine is innocent (as Thomas Lumley points out), and medical research probably has a lot more direct potential to cause harm (as patient concerns that the "treatment is not working" will be dismissed in the face of "randomized controlled trial" "evidence").
EDIT: And how could I overlook Andrew Gelman's take on this (which is right in his area)?