Wednesday, April 22, 2020

The politics of covid-19 start to bite

This is Joseph

This viral facebook post is excellent:
It's about making sure people can't file unemployment. It isn't about saving lives, certainly. It's not about the peak of the curve. I think lots of people are going to ignore the governor and stay home regardless. This isn't a decision being driven by epidemiology. It's the rawest and most lethal of political decisions, and it will kill people.
Kemp is looking forward to the fiscal discussion in 2021 and 2022, when all of this really starts to hit. He got elected by out-yahooing the field. His base has been trained to view government spending as a crime, and he knows that he becomes politically vulnerable to an attack if he raises taxes. He is not capable of delivering a nuanced message around necessity, because his base doesn't know how to hear it.
and
Georgians did the Kansas thing a couple of years ago and instituted a hard constitutional limit on income taxes of 6 percent. It cannot go higher without amending the state constitution. What that means is that there's no easy mechanism for the state to accommodate an extraordinary expense, like this, without somehow telling Republican reactionaries that they must raise taxes. 
I really do look forward to the day that the right wing has ideas other than "cut taxes" and "taxes bad". It is not that anybody likes taxes. I don't like many things that are necessary: queues, licenses, car service visits, etc . . . But there is a real lack of any idea of how to respond to an emergency; imagine fighting a war today under the fear that taxes might need to be raised to win it.

That said, it is a pity there isn't a union of states that could raise revenue, float debt, and step in to mitigate the financial crisis.

Why low estimates matter

This is Joseph

People wonder why I am so concerned about the preprints that are suggesting much higher levels of infection (and thus lower fatality rates) than expected. Consider this example:

and
The low estimate of 500 was used to justify a policy response to the covid-19 epidemic. The current number as of April 20th is 45,013 reported US deaths.

But we need to be careful about predictions that can change. This study suggests a lower infection fatality rate than previous studies show:
We can use our prevalence estimates to approximate the infection fatality rate from COVID-19 in Santa Clara County. As of April 10, 2020, 50 people have died of COVID-19 in the County, with an average increase of 6% daily in the number of deaths. If our estimates of 48,000-81,000 infections represent the cumulative total on April 1, and we project deaths to April 22 (a 3 week lag from time of infection to death), we estimate about 100 deaths in the county. A hundred deaths out of 48,000-81,000 infections corresponds to an infection fatality rate of 0.12-0.2%. If antibodies take longer than 3 days to appear, if the average duration from case identification to death is less than 3 weeks, or if the epidemic wave has peaked and growth in deaths is less than 6% daily, then the infection fatality rate would be lower. These straightforward estimations of infection fatality rate fail to account for age structure and changing treatment approaches to COVID-19. Nevertheless, our prevalence estimates can be used to update existing fatality rates given the large upwards revision of under-ascertainment.
Now, we all do "back of the envelope" calculations; this is me doing some. But we need to be careful. In a press release, the senior author pointed out that this calculation puts the IFR of covid-19 at about that of the flu, and these became actual headlines. This would be a massive issue, if true, as the decisions we are making depend on a higher IFR. But there are concerns with this research, and there is a second study that appears to be available only via press release.
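As a minimal sketch of that back-of-the-envelope arithmetic (all numbers taken from the quoted passage, nothing else assumed):

```python
# Reproduce the quoted back-of-the-envelope IFR: ~100 projected deaths
# against the 48,000-81,000 estimated infections in Santa Clara County.
projected_deaths = 100

for infections in (48_000, 81_000):
    ifr = projected_deaths / infections
    print(f"{infections:,} infections -> IFR of {ifr:.2%}")
```

Which reproduces the quoted 0.12-0.2% range up to rounding.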

Now, let me be clear, I would be deliriously happy if these studies were correct. I would feel much better about loosening the lock-down and "taking it on the chin", as Boris Johnson said. These super low rates of fatality would shift the conversation about the economy, as well as suggesting people will go back to movie theaters because we'll quickly all be immune.

But these numbers don't seem compatible with New York, Italy, Spain, or the careful studies in Iceland. Corrections to the Reason story note the NY problem. This doesn't mean we don't have a lot of asymptomatic infections, we do.

But it is important that we not base policy on numbers that can be rapidly revised and that may look quite different when they are eventually put into the record. The reason I started with the article above is that it is easy to dismiss the epidemic. I would like there not to be one too! But if you are going to argue, either way, in an official capacity, then there should be some serious accountability if the estimates are way off in a way that a statistician's help might easily have prevented.

Tuesday, April 21, 2020

Tuesday Tweets -- Dr. Elon will see you now

I have to admit it took me a while to realize to what degree Elon Musk learned his lines phonetically. In my defense, it was more difficult to tell at first. Musk would say lots of smart, technically savvy things, then throw in something stupid, but lots of intelligent people have sudden dips when they wander out of their areas of expertise. It wasn't until Musk routinely started going off script that it became clear he had no real understanding of anything scientific or technical. The illusion of comprehension was only maintained while he was repeating what the genuinely sharp engineers at SpaceX and Tesla had told him to say.

When he has to rely on Google, it doesn't work quite as well.

The latest example started when Musk, under fire for keeping his Tesla plant open during the pandemic, insisted he could make all the ventilators our ICUs need. Normally these promises are quickly forgotten but people kept bringing this one up, particularly when...
So the promise to build was quietly changed to a promise to provide.

And then the definition of "ventilator" was broadened.





  Musk responded with the full force of his expertise.



At which point, Elon Musk abruptly left the conversation.
But the conversation did not leave him.




Monday, April 20, 2020

More on CA infection rates

This is Joseph

A new study is out which also suggests that the infection rate is higher than expected:
Based on results of the first round of testing, the research team estimates that approximately 4.1% of the county's adult population has antibody to the virus. Adjusting this estimate for statistical margin of error implies about 2.8% to 5.6% of the county's adult population has antibody to the virus- which translates to approximately 221,000 to 442,000 adults in the county who have had the infection. That estimate is 28 to 55 times higher than the 7,994 confirmed cases of COVID-19 reported to the county by the time of the study in early April. The number of COVID-related deaths in the county has now surpassed 600.
Methods are not yet out, but there are several points of concern. One, such small percentages are vulnerable to a lot of errors in specificity (see Andrew Gelman and Thomas Lumley). The press release doesn't give the sample size (I cannot even find the study yet), but another press release says:
In L.A. County, researchers on April 10-14 sampled the blood of a random population of 863 area residents at six testing sites.
So we are talking about maybe 30 or so positives.

Two, the New York numbers make no sense if these estimates are correct: either the disease is being wildly misclassified (in NY or LA there must be important mis-attribution of cause of death) or the infection rates in NY must be enormous. Here is Megan McArdle discussing it.

The obvious way to solve this is to do a 1000 person sample in NYC. If it is even close to the huge infection rates needed for these numbers then it should be quickly obvious in any study of seroprevalence.

The final point: if the IFR is massively higher in NYC, that alone is worth studying. The city has 9,101 confirmed deaths and 4,582 probable deaths as of April 20th, and a population of 8.3 million. That suggests an IFR of 0.16% even assuming a 100% infection rate. The Stanford study suggested an IFR of 0.12% to 0.2% based on their models.
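The floor on the NYC IFR implied by those figures is just deaths divided by the total population; a quick sketch:

```python
# Lower bound on the NYC infection fatality rate: confirmed plus probable
# deaths (as of April 20th) divided by the entire population, i.e. the IFR
# you get if you assume literally every resident has been infected.
confirmed_deaths = 9_101
probable_deaths = 4_582
population = 8_300_000

ifr_floor = (confirmed_deaths + probable_deaths) / population
print(f"NYC IFR floor: {ifr_floor:.2%}")
```

Any infection rate below 100% pushes the implied IFR higher than this floor.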

Figuring this out is important and not helped by being led by press releases before full scientific evaluation.


It’s not about the economy versus public health – – more on the framing of the COVID-19 debate

Lots of things are over before they start. John Huston argued that once you had the script and the cast, the movie was mostly finished before filming even began. Something similar happens with journalism. Once the decision has been made what to cover and how to frame it, you have largely decided the outcome of the debate.

The conservative movement was especially good at playing this part of the game. Think back on all of the discussions about the social safety net that were set up in terms of compassion versus fiscal responsibility. The approach was profoundly dishonest but remarkably effective. For years, it allowed Randians like Paul Ryan to rebrand themselves as the reluctant grownups in the room.

We are seeing something similar but much more dangerous with the debate over the economic cost of social distancing. Establishment conservatives (those who acknowledge that the pandemic is real) are pushing hard to frame this as a choice between the toll the disease will take if allowed to go unchecked and the suffering that would accompany an economic collapse. It's worth noting that this is not the way most economists are likely to approach the question. There appears to be a consensus in both camps that the economic cost of inaction would probably be greater than that of reasonable containment steps. Furthermore, it's not clear how much power governments have to end social distancing.



James Pethokoukis writing for the National Interest.
The economic impact of the Swedish strategy is also unclear. The government certainly thinks it’s going to be pretty bad. According to the National Institute of Economic Research, an agency that reports to the Finance Ministry, its baseline scenario has Swedish real GDP growth declining by 3.4 percent this year, worse than its 2.9 percent forecast for the United States. It also sees a 6 percent contraction in the second quarter, comparable to the annualized US forecasts by Wall Street of a 25 percent to 30 percent contraction here from April through June. From the NIER: “Concern about infection and official advice on limiting social contact are putting a major damper on household demand, and delivery problems are disrupting production in parts of the business sector. … However, there is extreme uncertainty about future developments.” It doesn’t seem the light-touch approach provides immunity from severe economic hardship.

So where does the lives versus jobs framing come from? For a bit of political context, both Trump and the Republicans desperately need Q3 of this year to be as non-terrible as possible. 
Even a dead cat bounce, if the timing were lucky, could considerably improve the prospects of the GOP.

Sunday, April 19, 2020

Some covid-19 study thoughts

This is Joseph

This study needs context:
A seroprevalence study led by Stanford researchers estimates that the number of COVID-19 cases in Santa Clara County was 50 to 85 times higher than the number of confirmed cases by early April — meaning that the true case numbers could range from 48,000 to 81,000 people infected. The county has reported 1,870 confirmed cases as of Friday. 
Medicine professor and study co-lead Jay Bhattacharya said in a Friday press conference that the study results put coronavirus’ fatality rate “about on par with the flu,” but he warned that the lack of a vaccine means the two situations aren’t equivalent. 
And
Out of the 3,330 samples analyzed, 50 came back positive, indicating a crude prevalence rate of 1.5%. The researchers adjusted the initial results both by demographics — to account for the zip code, sex and race of study participants — and by test accuracy. The antibody test misses between 10 and 30% of those who have COVID-19 antibodies, according to Bendavid.  
The problem, of course, is the specificity of the test. The authors estimate it:
A combination of both data sources provides us with a combined sensitivity of 80.3% (95 CI 72.1-87.0%) and a specificity of 99.5% (95 CI 98.3-99.9%)
But 3330 samples would produce about 17 false positives at the specificity point estimate, 56 at the bottom of the interval (i.e., every one of the 50 positives could be a false positive, making the true prevalence ZERO), and 3 at the top of the interval. This sort of low-prevalence population is dangerous for drawing conclusions.
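Those false-positive counts are simple products of the sample size and one minus the specificity; a sketch using only the numbers quoted above:

```python
# Expected false positives in 3330 samples at the specificity point
# estimate and at both ends of its 95% confidence interval.
n_samples = 3330
observed_positives = 50

for label, specificity in [("point estimate", 0.995),
                           ("95% CI lower", 0.983),
                           ("95% CI upper", 0.999)]:
    expected_fp = n_samples * (1 - specificity)
    print(f"{label} ({specificity:.1%}): ~{expected_fp:.0f} false positives "
          f"out of {observed_positives} observed positives")
```

At the lower end of the specificity interval, expected false positives exceed the 50 positives actually observed.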

It also makes no sense. Look at New York as of Sunday:


There are 12.6 cases per thousand and 0.93 deaths per thousand. That is already at a flu level of mortality (suggesting nearly 100% of New Yorkers are infected, across the entire state). But that is required for this statement:
A hundred deaths out of 48,000-81,000 infections corresponds to an infection fatality rate of 0.12-0.2%.  
A 50 to 85 times under-count would mean 63% to 107% of New York residents are infected. That is for the entire state, not just NYC. If the Stanford researchers think this level of under-count is plausible, then it should be immediately apparent with a quick NY-based study.
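The implied New York attack rates follow directly from the case rate and the proposed under-count multipliers; as a sketch:

```python
# If confirmed cases under-count true infections by 50x to 85x, what share
# of New York State residents would have to be infected? Uses the 12.6
# cases per thousand figure quoted above.
cases_per_thousand = 12.6

for multiplier in (50, 85):
    implied_share = cases_per_thousand * multiplier / 1000
    print(f"{multiplier}x under-count -> {implied_share:.0%} of residents infected")
```

The 85x multiplier implies more infections than there are residents, which is the impossibility being flagged.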

So these rare infections require an extremely accurate estimate of specificity, or else you get huge confidence intervals that make the rest of it difficult to interpret; the scenarios in the paper don't seem to incorporate the uncertainty in the test specificity. If they do, I am surprised that they end up with confidence intervals so narrow. What we really learn is that the rate is small and could support a lot of possible infection fatality rates. I don't know that the media quotes above are supported by the analysis in the paper, once variance is considered carefully.

Postscript: After writing this, I realized that I am late to the party, via statchat. The linked articles discuss specificity in a lot more detail, although neither uses the numbers straight from the paper.

Postscript 2: Never schedule a post for Monday. Andrew Gelman is awesome here. Go read that instead.

Postscript 3: I think here is where Bayesian perspectives are super helpful. For the NY death numbers to be even close, more than half of the city must have been infected. The Diamond Princess only had 17% infected, with an IFR of 0.5% (95% CI: 0.2-1.2%). Applied to NYC (about 9000 deaths), that suggests 1.8 million infections (about 20% of the city), with a range of 750,000 (less than 10%) to 4.5 million (a bit over 50%). NYC demographics are not the same as Santa Clara County's, but the median age in NYC is 36.9 years and the median age in Santa Clara is 37.2; these are not wildly different numbers that would make NYC uniquely vulnerable.
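The Diamond Princess arithmetic can be sketched the same way: deaths divided by an assumed IFR gives implied infections (figures from the text above, rounded):

```python
# Apply the Diamond Princess IFR of 0.5% (95% CI: 0.2%-1.2%) to roughly
# 9,000 NYC deaths to get implied infection counts and attack rates.
nyc_deaths = 9_000
nyc_population = 8_300_000

for label, ifr in [("central estimate (0.5%)", 0.005),
                   ("CI upper (1.2%)", 0.012),
                   ("CI lower (0.2%)", 0.002)]:
    implied_infections = nyc_deaths / ifr
    attack_rate = implied_infections / nyc_population
    print(f"IFR {label}: {implied_infections:,.0f} infections "
          f"(~{attack_rate:.0%} of the city)")
```

A higher IFR implies fewer infections for the same death count, which is why the upper end of the CI gives the lower attack rate.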

Rhetorical Orthogonality

Repost

I'm about to do one of those things that annoys the hell out of me when other people do it, namely taking a well-defined technical concept and trying to generalize it in order to make some big sweeping statements. So I start with apologies, but I think this goes to the heart of many of the problems we've been seeing with journalism and the public discourse (and also explains much of the difficulty that a lot of us run into when we try to address those problems).

If we think of orthogonal data in the broad sense as something that brings in new information, it gives us a useful way of thinking about the discussion process. I'm thinking in a practical, not a theoretical sense here. Obviously a mathematical theorem does not technically bring any new information into a system, but in practical terms, it can certainly increase our knowledge. By the same token, a new argument may simply present generally known facts in a new light, but it can still increase our understanding. (You might argue at this point that I'm conflating knowledge and understanding. You'd probably be right, but, in this context, I think it's a distinction without a difference.)

My hypothesis here is that (putting aside literary considerations for the moment), good journalism should be judged mainly on the criteria of accuracy and orthogonality, with the second being, if anything, more important than the first. Instead, we often see indifference to accuracy and barely concealed hostility toward orthogonality. We do see a great deal of lip service toward diversity of opinion, but the majority of that "diversity" is distinctly non-orthogonal, falling on the same axes of the previous arguments, just going the opposite direction.

For example, imagine a disgruntled employee locked in an office with a gun. "He's willing to shoot."/"He's not willing to shoot" are nonorthogonal statements even though they contradict each other. By comparison, "he doesn't have any bullets" would be orthogonal. I'd put most of the discussion about liberal bias in the mainstream media squarely in the nonorthogonal category, along with every single column written by Bret Stephens for the New York Times.

Nonorthogonal debate has become the default mode for most journalists. What's more, they actually feel good about themselves for doing it. Whenever you have an expert say "is," you are absolutely required to find another who will say "is not." This practice has deservedly been mocked in cases where one of the arguments is far more convincing than the other (as with global warming), but even when there's some kind of rough symmetry between the positions, it is still a dangerously constrained and unproductive way of discussing a question.

Friday, April 17, 2020

Ahem

This is Joseph.

It is Tyler Cowen week on the blog.
But I ask you, where are the numerous cases of leading epidemiologists screaming bloody murder to the press, or on their blogs, or in any other manner, that the most commonly used model for this all-important policy analysis is deeply wrong and in some regards close to a fraud? 
So we did have blog material on this, here, here, and here.  But it is a fair critique that we should have been more aggressive in blogging about this issue. Andrew Gelman was ahead of the curve here, way back on March 29th:
I have a few thoughts on this model. First, yeah, it’s curve-fitting, no more and no less. Second, if they’re gonna fit a model like this, I’d recommend they just fit it in Stan: the methodological appendix has all sorts of fragile nonlinear-least-squares stuff that we don’t really need any more. Third, I guess there’s nothing wrong with doing this sort of analysis, as long as it’s clear what the assumptions are. What the method is really doing is using the second derivative of the time trend on the log scale to estimate where we are on the curve. Once that second derivative goes negative, so the exponential growth is slowing, the model takes this as evidence that the rate of growth on the log scale will rapidly continue to go toward zero and then go negative. Fourth, yeah, what Dorman says: you can’t take the model for the asymptotic limit seriously. For example, in that methodological appendix, they say that they use the probit (“ERF”) rather than the logit curve because the probit fits the data better. That’s fine, but there’s no reason to think that the functional form at the beginning of the spread of a disease will match the functional form (or, for that matter, the parameters of the curve) at later stages. It really is the tail wagging the dog.
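Gelman's point about the second derivative on the log scale can be illustrated with a toy example (all numbers invented; this is a sketch of the idea, not the IHME code): for pure exponential growth, the second difference of log cumulative deaths is zero, while for a Gaussian-CDF ("ERF") curve it is negative, which the curve fit reads as the epidemic turning over.

```python
import math

# Second differences of log(cumulative deaths) for two toy curves.
def second_diffs(series):
    logs = [math.log(x) for x in series]
    return [logs[i + 1] - 2 * logs[i] + logs[i - 1]
            for i in range(1, len(logs) - 1)]

# Pure exponential growth: log is linear in time, so second differences ~0.
exponential = [10 * 1.2 ** t for t in range(20)]

# Gaussian-CDF ("ERF") cumulative curve, sampled well before its inflection:
# the log is strictly concave, so second differences are negative throughout.
erf_curve = [5_000 * (1 + math.erf((t - 30) / 10)) for t in range(1, 20)]

assert all(abs(d) < 1e-9 for d in second_diffs(exponential))
assert all(d < 0 for d in second_diffs(erf_curve))
print("exponential: flat on the log scale; ERF curve: always decelerating")
```

This is why early-epidemic data, which look nearly exponential, give the fit almost no information about where the curve will flatten.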

What I did last weekend

You know you're an Angeleno when you hear about a pandemic and your first thought is "with this traffic I can make Mount Baldy in 45 minutes."


Never even had to leave the county.

And a birthday shoutout








Thursday, April 16, 2020

How they put the hype in hyperloop

Given the events of the moment, this topic seems almost quaint, but one of these days we'll be returning to our old threads, and when we do, I'll want to make a note of these examples of how this particular bullshit cycle made the piles ever deeper.

The following title popped up yesterday on a Google recommendation list:

Elon Musk's Hyperloop Idea To Become Reality In North Holland Thanks To Hardt



If you are at all familiar with this genre of puff pieces, you know that the expectations will be downgraded quickly, but even by those standards, the jump from the title to the first paragraph was remarkably abrupt.

"Elon Musk’s hyperloop idea might have a chance of becoming a reality in North Holland."

We can only hope an actual hyperloop's braking system would work that well.



The author also apparently didn't get the memo that Musk's proposal for a high speed air-bearing system was quietly dropped years ago (Elon's only original contribution to all of the current plans was the name).

The post links to a Guardian article that also does itself proud in the hype department. Keep in mind that the insanely optimistic capacity numbers of the various proposals assume you are squeezing people in a pod and running full loads 24/7.

Now look at this artist's rendering.


They aren't even trying to be credible or consistent. Just cool pictures for gullible journalists.

Another old post we're standing by


Thursday, March 2, 2017

There will be safe seats. There are no safe seats.

In 2017, we have a perfect example of when not to use static thinking and naïve extrapolation.

Not only are things changing rapidly, but, more importantly, there are a large number of entirely plausible scenarios that would radically reshape the political landscape and would undoubtedly interact in unpredictable ways. This is not "what if the ax falls?" speculation; if anything, we have gotten to the point where the probability of at least one of these cataclysmic shifts happening is greater than the probability of none. And while we can't productively speculate on exactly how things will play out, we can say that the risks fall disproportionately on the Republicans.

Somewhat paradoxically, chaos and uncertainty can make certain strategic decisions easier. Under more normal (i.e. stable) circumstances it makes sense to expend little or no resources on unwinnable fights (or, conversely,  to spend considerable time and effort deciding what's winnable). The very concept of "unwinnable," however, is based on a whole string of assumptions, many of which we cannot make under the present conditions.

The optimal strategy under the circumstances for the Democrats is to field viable candidates for, if possible, every major 2018 race. This is based on the assumption not that every seat is winnable, but that no one can, at this point, say with a high level of confidence what the winnable seats are.

Wednesday, April 15, 2020

The strange bedfellows of hydroxychloroquine

Picking up on this previous thread.

I know we've been through all of this stuff about Leo Strauss and the conservative movement before so I'm not going to drag this out into great detail except to reiterate that if you want to have a functional  institution that makes extensive use of internal misinformation, you have to make sure things move in the right direction.

With misinformation systems as with plumbing, when the flow starts going the wrong way, the results are seldom pretty. This has been a problem for the GOP for at least a few years now. A number of people in positions of authority, (particularly in the tea party wing) have bought into notions that were probably intended simply to keep the cannon-fodder happy. This may also partly explain the internal polling fiasco at the Romney campaign.

We don't know if hydroxychloroquine will turn out to have a useful role in the treatment of Covid 19, but we can say with almost absolute certainty that it's not a magic bullet. Controlled studies are incredibly helpful for answering a lot of questions, but even with observational data it's easy to spot large, simple, immediate effects. Thousands of patients have been treated with hydroxychloroquine (right now doctors are trying lots of things) and if there has been a benefit, it has been subtle.

Nonetheless, belief in the miraculous powers of the drug is widespread. Some of this comes from the people you'd expect: flakes; conspiracy theorists; and those who prey upon them.

Then there's the group we've talked about before, Republican politicians drinking from the wrong pipe, seeming to actually believe the lies meant for the base. One example of many (emphasis added):

And besides, the first-term Republican told reporters at a briefing this month, “South Dakota is not New York City.”

But now South Dakota is home to one of the largest single coronavirus clusters anywhere in the United States, with more than 300 workers at a giant ­pork-processing plant falling ill. With the case numbers continuing to spike, the company was forced to announce the indefinite closure of the facility Sunday, threatening the U.S. food supply.

“A shelter-in-place order is needed now. It is needed today,” said Sioux Falls Mayor Paul TenHaken, whose city is at the center of South Dakota’s outbreak and who has had to improvise with voluntary recommendations in the absence of statewide action.

But the governor continued to resist. Instead, she used a media briefing Monday to announce trials of a drug that President Trump has repeatedly touted as a potential breakthrough in the fight against the coronavirus, despite a lack of scientific evidence.

But there's one group that, for me, drives home just how out of control the bullshit problem is on the right: billionaires.





And no party would be complete without...



Tuesday, April 14, 2020

More on Tyler Cowen and IHME

This is Joseph

EDIT: Andrew Gelman is also on the case with the IHME model. In case the stuff below isn't clear, critiquing this model is not only healthy but essential for good science. 

Tyler Cowen writes:

And here is a further paper on the IMHE model, by statisticians from CTDS, Northwestern University and the University of Texas, excerpt from the opener:
In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state, (see Figure 1)
The ability of the model to make accurate predictions decreases with increasing amount of data. (figure 2)
Again, I am very happy to present counter evidence to these arguments.  I readily admit this is outside my area of expertise, but I have read through the paper and it is not much more than a few pages of recording numbers and comparing them to the actual outcomes (you will note the model predicts New York fairly well, and thus the predictions are of a “train wreck” nature).
This is all fair and part of what the Epidemiology community has been discussing. But the follow-up is:
So now really is the time to be asking tough questions about epidemiology, and yes, epidemiologists.  I would very gladly publish and “signal boost” the best positive response possible. 
From "About IHME":
The Institute for Health Metrics and Evaluation (IHME) is an independent population health research center at UW Medicine, part of the University of Washington, that provides rigorous and comparable measurement of the world's most important health problems and evaluates the strategies used to address them. IHME makes this information freely available so that policymakers have the evidence they need to make informed decisions about how to allocate resources to best improve population health. 
Do you see the word Epidemiology anywhere? I think that the reason I bristle is not that we shouldn't ask hard questions about models. It is the leap from weak models produced in a different field to doubts about the field of Epidemiology itself. I dunno -- do bad IHME models make me ask hard questions about physics too?

What about the director?
Christopher J.L. Murray, MD, DPhil, is the Chair and Professor of Health Metrics Sciences and Director of the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. His career has focused on improving health for everyone worldwide by improving health evidence. A physician and health economist, his work has led to the development of a range of new methods and empirical studies to strengthen health measurement, analyze the performance of public health and medical care systems, and assess the cost-effectiveness of health technologies.
I see Health Economist but not Epidemiologist. There is a department of Epidemiology at the University of Washington, and there are people in IHME affiliated with it, but it's hardly a large overlap. It's like blaming the Economics Department for economic forecasting models developed by the Business School: it's the right general disciplinary area, and these people might talk, but they are not always going to agree on methodology. There is definitely not a tight collaboration here, and IHME has its own graduate programs, including a PhD, in Health Metrics, which it is quite clear are distinct from Epidemiology.

Now, I don't want this line of criticism to take away from the fact that the IHME model has some . . . limitations. Here is Ellie Murray (no relation, I think) quoting a statistics link on the IHME model because it was so different from what Epidemiology does. Here is Carl Bergstrom's critique of the model (UW Biology). Here are two bloggers showing discomfort with it a week ago, for many of the reasons that are coming out now.

The IHME model was a very good way of articulating exponential growth. I think Kevin Drum is doing something almost as sophisticated, and his closeness to the data has him asking some good questions. Recent papers have me asking some of the same questions and, coupled with the Iceland data (nicely noted by Korolev), are forcing this (non-epidemic-modeling) epidemiologist to ask some hard questions.

That said, the first part of the post was really nice, as Ivan Korolev is doing some smart discussion of SEIRD models. That is the sort of contribution that matters: the models he is critiquing at least come from the field itself, and the Iceland data set is a very useful source of adjunct information. Maybe I need to go back and spend some time on that.

Anyway, the health field is complicated and there are a lot of overlapping issues here. I do think it is of great service to be asking hard questions about the models that guide policy. If people want some bad Epidemiology models, I am pleased to provide some examples. But in general some more precision about the field questions would be useful here.


Tuesday Tweets (now actually on Tuesday)



From Vox:

Dr. Deborah Birx, a respected physician and experienced diplomat who seemingly serves as a voice of reason in her role as coordinator of the White House’s coronavirus task force, raised a lot of eyebrows on Thursday with her effusive praise of President Donald Trump as “attentive to the scientific literature and the details” during an interview with the Christian Broadcasting Network.

“He’s been so attentive to the scientific literature and the details and the data,” Birx said. “I think his ability to analyze and integrate data that comes out of his long history in business has really been a real benefit during these discussions about medical issues.”


Neither rain nor sleet nor snow...

This is the essential thread.







Tyler explains everything
And the punchline...

  

You should keep an eye on OAN




More pandemic news


Essential IHME thread



 
From another of those subsidized conservative "think tanks" (and yes, this does require scare quotes).

 

Meanwhile






And in closing
 

Monday, April 13, 2020

A reply to Tyler Cowen's questions on Epidemiology

This is Joseph.

Tyler Cowen had a critique of epidemiological disease models. Based on the list of issues, I think he is critiquing the IHME model, which comes out of a school of medicine, is led by Chris Murray (a trained DPhil economist), and is more of a health economics unit than an epidemiology one (happy to correct if Tyler wants to link to specific epidemiology models). The big epidemiology models are the ones from Imperial College, and the critiques seem misplaced for those. But this is, of course, a guess.

But he asked some questions, so here are some answers.

a. As a class of scientists, how much are epidemiologists paid?  Is good or bad news better for their salaries?

Epidemiologists are typically paid above average for academics because of their links to medical schools, though those in departments of public health are shamefully underpaid. Since people want good news from them, there is some pressure to produce it, and most of our scandals come from over-optimistic forecasts.

b. How smart are they?  What are their average GRE scores?

Very hard to answer, as there is no standard undergraduate preparation. The field contains a lot of MDs (no GRE scores) and people from a diverse set of backgrounds. I would say that the variance is high, more than anything else.

c. Are they hired into thick, liquid academic and institutional markets?  And how meritocratic are those markets?

They are often hired into soft-money positions that are contingent on grant funding; I was in one for about a decade. This is a selection process that breeds productivity, although a lot of that productivity is in the area of grant writing.

d. What is their overall track record on predictions, whether before or during this crisis?

Very little of epidemiology is in forecasting. I am an infectious disease epidemiologist and generally do not build epidemic forecasting models; I look at treatment effectiveness.

e. On average, what is the political orientation of epidemiologists?  And compared to other academics?  Which social welfare function do they use when they make non-trivial recommendations?

Public health, as a field, tends to rely on effective government. It is no more odd to see epidemiologists lean left than it would be to see a small-business trade association president lean right. I think it was Megan McArdle who pointed out that the best example of government being effective is public health (vaccinations, public sanitation, etc . . .)

f. We know, from economics, that if you are a French economist, being a Frenchman predicts your political views better than does being an economist (there is an old MR post on this somewhere).  Is there a comparable phenomenon in epidemiology?

There are huge wars in epidemiology, but they are not driven by country so far as I can tell. They are driven by epidemiological sub-field: observational versus experimental, causal inference versus traditional epidemiology. That sort of thing.

g. How well do they understand how to model uncertainty of forecasts, relative to say what a top econometrician would know?

In my experience, very well. Look at the range of forecasts in the Imperial College models, which is far wider than in the IHME model. Their forecasts span more than 10-fold differences depending on the response functions of the government and populace.
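The 10-fold point is easy to see with back-of-the-envelope exponential growth. A toy calculation, with invented daily growth rates standing in for different response scenarios:

```python
def project(cases, daily_growth, days):
    """Project case counts forward under a constant exponential growth rate."""
    for _ in range(days):
        cases *= 1.0 + daily_growth
    return cases

# Invented scenarios: a strong response holds growth to 2% per day,
# a weak response allows 12% per day.
start = 1000
strong_response = project(start, 0.02, 30)
weak_response = project(start, 0.12, 30)
ratio = weak_response / strong_response
```

After a single month these two scenarios differ by more than a factor of ten, which is why wide forecast intervals are honesty about behavior, not sloppiness about disease.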

h. Are there “zombie epidemiologists” in the manner that Paul Krugman charges there are “zombie economists”?  If so, what do you have to do to earn that designation?  And are the zombies sometimes right, or right on some issues?  How meta-rational are those who allege zombie-ism?

Some fields of epidemiology simply have no high-quality data (see nutritional epidemiology); fields with access to robust experiments tend to purge these ideas. Again, part of the problem is that the variance in both the people and the subjects covered by "epidemiology" is huge, as are the tools available. Fields with experiments definitely kill off zombie ideas; it is much harder when everything is observational.

i. How many of them have studied Philip Tetlock’s work on forecasting?

I know of it, and I tend to think it is less applicable to disease models, which tend to be more mechanistic. But epidemic curves are not my sub-field. That said, we have had some incredible blunders in epidemiology (Farr's Law) when we got too mechanistic.
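Farr's Law is a good example of a mechanistic rule that can fail badly: it assumes the ratio of successive growth ratios is constant, which forces the epidemic curve to rise and fall symmetrically whatever the disease actually does. A sketch with invented case counts:

```python
def farr_projection(c1, c2, c3, steps):
    """Extrapolate case counts under Farr's Law: the second ratio K is constant."""
    k = (c3 / c2) / (c2 / c1)   # Farr's constant ratio of ratios
    counts = [c1, c2, c3]
    ratio = c3 / c2
    for _ in range(steps):
        ratio *= k              # each period's growth ratio shrinks by K
        counts.append(counts[-1] * ratio)
    return counts

# Invented counts with slowing growth (ratios 2.0 then 1.8, so K = 0.9):
# the rule then guarantees a peak and a symmetric decline.
counts = farr_projection(100, 200, 360, 25)
```

Applied to a disease whose curve is not symmetric, the method confidently forecasts a decline that never comes; the famous cautionary case is its application to the US AIDS epidemic around 1990.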