Thursday, December 16, 2010

The Homebase study looks OK but the New York Times is still a mess

Alex Tabarrok and Joseph both have posts up on a study to determine the effectiveness of a program to prevent homelessness by randomly accepting 200 of the 400 applicants the program received last summer, then comparing how the accepted fare against the rejected. This isn't exactly how I'd set the study up, but the choices the researchers made seem both reasonable and ethical. As one of the researchers pointed out, this is not an entitlement; it is a small program that already rejects some applicants. All the researchers are doing is rearranging the pool.
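
To make the design concrete, here's a minimal sketch of the kind of two-group comparison the researchers would presumably run. The group sizes come from the article; the shelter counts are invented for illustration, since the study's actual results aren't out yet.

```python
# A sketch only: group sizes come from the article, but the shelter counts
# below are invented for illustration; the study's actual results aren't out yet.
from math import sqrt, erf

n_accepted, n_rejected = 200, 200             # randomized group sizes reported in the story
shelters_accepted, shelters_rejected = 15, 40  # hypothetical outcome counts

p1 = shelters_accepted / n_accepted   # shelter-entry rate among those given Homebase
p2 = shelters_rejected / n_rejected   # shelter-entry rate among those turned away

# Standard pooled two-proportion z-test for a difference in shelter-entry rates.
p_pool = (shelters_accepted + shelters_rejected) / (n_accepted + n_rejected)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_accepted + 1 / n_rejected))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

print(f"accepted: {p1:.1%} entered shelters, rejected: {p2:.1%}, "
      f"z = {z:.2f}, p = {p_value:.4f}")
```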

If everything is as it seems, my only major concern is that, given our current exceptionally bad economic conditions, the results from this study might not generalize well.

But the word 'seems' is important because the NYT story that all of these posts are based on simply isn't informative enough, or well enough written, for the reader to form an informed opinion.

The story starts out ominously with these paragraphs:
It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.
It might not be reasonable to expect a dissertation on the distinction between double-blind and open-label studies, but given that the subject of the article is the effectiveness and ethics of certain kinds of open-label studies, the fact that the writer may not know that there is a distinction does not bode well.

It does, however, bode accurately because the writer apparently proceeds to blur a much more important distinction, that between pilot and ongoing programs:
Such trials, while not new, are becoming especially popular in developing countries. In India, for example, researchers using a controlled trial found that installing cameras in classrooms reduced teacher absenteeism at rural schools. Children given deworming treatment in Kenya ended up having better attendance at school and growing taller.
These are pilot programs. The Indian government didn't install cameras in all of its rural schools and then go to the expense of randomly removing half of them, nor did the Kenyans suddenly withdraw preventative care from half of their children. From a practical, ethical and analytic perspective, going from no one getting a treatment to a randomly selected sample getting it is radically different from going from everyone getting a treatment to a randomly selected sample getting it.

Putting aside the obvious practical and ethical points, the analytic approach to an ongoing program is different because you start with a great deal of data. You don't have to be a Bayesian to believe that data from other sources should affect the decisions a statistician makes, choices ranging from setting priorities to deciding what to study to designing experiments. Statisticians never work in a vacuum.

There is little doubt that the best data on this program that we could reasonably hope for would come from some kind of open-label study with random assignment but, given the inevitable concerns and caveats that go with open-label studies, exactly how much better would that data be? What kind of data came out of the original pilot study? What kind of data do we have on similar programs across the country? And most importantly, what's the magnitude of the effect we seem to be seeing?

On that topic we get the following piece of he said/she said:
Advocates for the homeless said they were puzzled about why the trial was necessary, since the city proclaimed the Homebase program as “highly successful” in the September 2010 Mayor’s Management Report, saying that over 90 percent of families that received help from Homebase did not end up in homeless shelters.

...

But Seth Diamond, commissioner of the Homeless Services Department, said that just because 90 percent of the families helped by Homebase stayed out of shelters did not mean it was Homebase that kept families in their homes. People who sought out Homebase might be resourceful to begin with, he said, and adept at patching together various means of housing help.
Before we can ask if this proposed selection effect can explain the impact of Homebase, it would help if we had the slightest idea what that impact was. It's true that over 90 percent of the families in question did not end up in shelters. It is also true that 99.99 percent of the people who took homeopathic remedies for their colds did recover. Neither number is particularly useful without the proper context.

The proper context shouldn't be that difficult to find. We have more than enough information to estimate the likelihood of families in these financial situations ending up in shelters. Of course, there is no upper bound on a possible selection effect, but I'm going to weigh the possibility differently if the comparison rate turns out to be 80 percent than I would if it were 40 percent.
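
To see why that matters, here's the back-of-the-envelope arithmetic as a quick sketch. The 90 percent figure is the one the city reported; the 80 and 40 percent comparison rates are just the hypotheticals above, not real data.

```python
# Toy arithmetic: the 90% "stayed out of shelters" figure is the city's claim;
# the 80% and 40% comparison rates are only the hypotheticals from the text.
homebase_stay_rate = 0.90

for baseline_stay_rate in (0.80, 0.40):
    absolute_gain = homebase_stay_rate - baseline_stay_rate
    # Ratio of shelter-entry risks: Homebase families vs. comparable families without it.
    risk_ratio = (1 - homebase_stay_rate) / (1 - baseline_stay_rate)
    print(f"baseline {baseline_stay_rate:.0%}: absolute gain {absolute_gain:+.0%}, "
          f"shelter-entry risk ratio {risk_ratio:.2f}")
```

In the first case Homebase looks like a ten-point improvement that halves shelter entry; in the second it's a fifty-point improvement that cuts shelter entry by more than eighty percent. Those are very different stories, which is exactly why the article's number tells us so little on its own.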

None of this is a criticism of the actual research. Other than my concern about generalizing from data gathered under 2010's economic conditions, this looks like a good study. Governments and large organizations should always be on the lookout for ways that experimental design and sampling theory can improve our data.

But I can't say with any certainty that this is a good place for this kind of data gathering because I'm getting my information from a badly written story. There are three papers I read frequently: the LA Times, the New York Times and the Wall Street Journal, and of those, the one most likely to run lazy, low-context, he said/she said pieces is the old gray lady.
