Comments, observations and thoughts from two bloggers on applied statistics, higher education and epidemiology. Joseph is an associate professor. Mark is a professional statistician and former math teacher.
Here’s how I’m framing this. Very much looking for feedback.
Think of the following groups:
1. Vote on election day
2. Vote early in person
3. Vote by mail (drop box)
4. Vote by mail
   a. Before Oct 25
   b. After Oct 25
2., 3., and 4a. appear to favor Democrats.
We assume 1. will favor Republicans.
I’m under the impression that the GOP’s attempts to throw out votes are focused on group 4b. [Is this correct?]
Because Trump et al. telegraphed their strategy months in advance, the Democrats have had time to push back hard with the message: vote as early as possible and, under no circumstances, rely on the mail in the final week.
In order for the GOP tactic to work:
1. The election in important swing states must be fairly close.
2. A large number of voters need to have mailed their ballots in that state less than a week before the election.
3. These ballots must be heavily Democratic.
I am dubious of 2 and 3. At this point, only the lowest of low-information Democrats would think that mailing a ballot at the last minute is a good idea, but I don’t have any data to back that up.
Can anyone shed some analytic light on this?
______________________________________
UPDATE:
A bit of context. There were 137 million votes cast in 2016.
Breathtaking: we're likely on track for 100 million+ votes cast *before* Election Day, nearly double the @EACgov estimate of 57.2 million advance ballots cast in 2016.
With what will very likely be more than half of this election's votes already cast, the order in which they are counted will probably determine the story Trump's advocates will be able to tell.
2/ ...may be true in places such as PA and WI where they won’t start counting those pre-Election Day votes until that day, and where GOPers disproportionately will be voting in person Election Day. There may be a “red mirage.”
I also wonder if this massive and successful push for early voting, especially among Democrats, and the related telegraphing of the Trump/USPS/SCOTUS plan to steal the election means that there won't be that many votes left to worry about arriving after Nov. 3.
People seem to be conflating "they won't count (mail) ballots that arrive after Election Day"—true in some states, not others, and currently the subject of much litigation—with "they won't count ballots after Election Day, period", which would be orders of magnitude more radical.
Finally, not as a prediction but just as an observation. Early votes are in the bank; election day votes are accounts receivable. One of the problems with encouraging your followers to wait till the last day is that you don't have a floor if that day turns ugly.
Many years ago in Paris, Arkansas, there was a small shop owned by a local family named Kafka. I never met them and have no idea what relation they had to the writer.
The shop was of a very common type in the Ozarks, folksy with more often than not a made-in-Taiwan hillbilly décor. I don’t recall ever going inside but I do remember the sign which read something like this:
Krafts Antiques Fun Stuff Kwilts Art Supplies
This memory is not all that relevant but then neither is this post. It would have been had I written it when I first intended to a month or two ago. Back then, economic indicators and forecasts probably had a bigger role in the models of 538 and the Economist. I’d imagine now it’s all about weighing polls and estimating turnout. But even if I missed the timeliness window for this post, there are some less ephemeral issues I still want to hit.
On some level, all predictive modeling relies on the assumption that the important relationships and trends we’ve observed in data in the past will continue to hold in the future. We don’t talk about it all that much but this is one of those things that makes all competent statisticians at least a little worried. This is especially true when we go out of the range of our data, when the variables we put into our model start having values we’ve never seen before.
Pretty much all serious election models factor in the economy and where it’s going. The actual relationship may be complicated but, at the risk of oversimplifying, an economy that’s good or trending up favors the party in power and vice versa.
But what happens when the economy is good for half the people and terrible for the rest? Many economists have described our current situation as a K-shaped recovery, with white-collar knowledge workers doing fairly well while those in other sectors, such as the service industry, suffer horribly.
As far as I know, we haven’t had a presidential election during a K-shaped recovery, at least not since we started scientific polling. This is outside the range of data (as is the pandemic, as is having a president openly undermining the election, as is…). This is where the art of modeling kicks in. The statisticians at 538 are smart and experienced and I have faith in their judgement.
But when you read a credulous story about a model confidently predicting some wildly counter-intuitive development, it is also good to remember that modeling is a mixture of science and art and some people aren’t very good at the latter.
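To put the out-of-range worry in miniature, here is a toy sketch (the numbers and the "true" curve below are invented and have nothing to do with any actual election model): a linear fit that looks fine inside the range it was trained on can be badly wrong once you ask it about values it has never seen.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: the "true" relationship is mildly nonlinear, but within
# the observed range a straight line fits it quite well.
def truth(x):
    return 2.0 + 0.5 * x + 0.05 * x**2

x_train = rng.uniform(0, 10, size=200)
y_train = truth(x_train) + rng.normal(0, 1.0, size=200)

# Fit a simple linear model on the observed range [0, 10].
slope, intercept = np.polyfit(x_train, y_train, deg=1)

for x_new in (5.0, 10.0, 20.0, 30.0):   # the last two are out of range
    pred = intercept + slope * x_new
    print(f"x = {x_new:5.1f}  prediction = {pred:6.1f}  truth = {truth(x_new):6.1f}")
```

Inside the observed range the fit and the truth stay within the noise; a couple of range-widths out, they part company, and nothing in the fitted model warns you.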
The idea that you should incorporate a correlation matrix into an electoral model is one of those things that just makes sense across the board. It is supported by the data, it is intuitively obvious, and it is easy to justify from first principles.
But here’s the part that bothers me just a little bit. Historically, these models predicted the outcome of a collection of events that happened in different states but mostly at a single point in time: election day.
That’s not how elections work in 2020. Different states now have wildly different cadences and rules. Most of the votes might be cast in one state before another even starts the process. Is the likelihood of a candidate outperforming the polls in the first two weeks of October in one state still strongly correlated to the probability in another state on election day? Do certain aspects of the model fare better than others under these new conditions? Would 538’s model handle these changes differently than the Economist model would?
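To make the correlation question concrete, here is a minimal sketch (every margin, electoral-vote count, and error level below is made up for illustration; this is not the 538 or Economist machinery): it simulates a handful of swing states whose polling errors share a common correlation and shows how much the overall win probability moves as that correlation changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs, chosen only for illustration: three swing states with
# assumed polling margins (points), electoral votes, and polling-error SDs.
margins = np.array([2.0, 1.0, 3.0])   # candidate A's polled lead in each state
evs     = np.array([20, 16, 10])      # electoral votes at stake
sigma   = np.array([4.0, 4.0, 4.0])   # assumed polling-error standard deviation

def win_prob(rho, n_sims=100_000):
    """P(candidate A carries a majority of these EVs) when every pair of
    state-level polling errors shares the same correlation rho."""
    corr = np.full((3, 3), rho)
    np.fill_diagonal(corr, 1.0)
    cov = corr * np.outer(sigma, sigma)          # correlation -> covariance
    errors = rng.multivariate_normal(np.zeros(3), cov, size=n_sims)
    realized = margins + errors                  # simulated final margins
    ev_won = (realized > 0).astype(int) @ evs    # EVs carried by A in each sim
    return (ev_won > evs.sum() / 2).mean()

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho:.1f}  ->  P(A wins these EVs) = {win_prob(rho):.3f}")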
I have absolutely no idea whether or not these are important issues. I am woefully ignorant on the subject, but it seems like an interesting topic for discussion so if anyone better informed than I (which is to say pretty much anyone reading this blog) would care to join in, I would love to hear some opinions.
The implications if we start thinking of the effect of voter suppression as a J-shaped curve.
key point from @jonathanalter below: people figure it out when others try to suppress their votes. they don’t like it and push back. it’s a big motivator https://t.co/7EYf8YfnLW
The culpability of the Hoover Institution for the current crisis.
Many people have probably heard about the Great Barrington Declaration, denouncing social distancing and calling for "herd immunity." It's popular with the White House, and it's the product of a Koch-linked think tank 1/ https://t.co/jHU62woRuc
Long time readers will know where these are going.
In Sep. 2016, I wrote my post, "Asymmetry between the major parties fries the circuits of the mainstream press." https://t.co/qhvtpWgtzC Today's "both sides see the other as the destroyer" piece published by the Washington Post is a circuits-fried work. https://t.co/HFIfDgyb8j
This piece is incredibly credulous about Trump's approach to the economy and in accepting the myth that, prior to Trump, the GOP embraced "sacred verities about government debt." https://t.co/2WRv6EUdCU
The bottom line is that even the institutionalists are coming to understand that an equilibrium in which one side does whatever they want, while the other side tries to abide by old rules is not a good equilibrium for the institution itself. https://t.co/8GLkjrL5aZ
I've been thinking a lot about the "surge ahead on election day and cut off the count strategy." Among other things, I've been thinking how stupid it was to telegraph that plan months in advance. If Trump plays Goldfinger in the remake, he'll tell Bond about his Fort Knox plan during the gin game.
indeed, not just Trump but kavanaugh too. Because the founders original intent was to end the counting process after network media desks say who won. https://t.co/J1waYHIEVh
Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. Vote as soon as you can. https://t.co/R204l2S544 https://t.co/0TGnBbD5vW
The approach of quietly chipping away at reproductive rights on a state level has given moderate Republicans a lot of breathing room. That may be going away.
Pew Research found Alaska to be one of the most pro-choice states in the nation. Pew found Alaska is more pro-choice than California. https://t.co/0Co3JIYxC0
[Search for the "great unwinding."] In 2020 the GOP will still have to appease the cult members. Though unlikely, I would not rule out Don Jr. or Ivanka or even the return of Trump himself. Whoever gets it will find that holding onto the base while building a broad coalition will be... challenging.
Interesting companion piece to yesterday's post on Tesla's disturbing beta rollout of FSD. It promises that the driver has to do almost nothing. The claim turns out to be false but perhaps that's a feature, not a bug. It turns out that doing almost nothing is really hard.
People often use the phrase “in the loop” to describe how connected someone is (or is not) to a decision-making process. Fewer people know that this “control loop” has a specific name: Observe, Orient, Decide, Act (OODA). The framework was originally devised by a U.S. Air Force colonel, and being “in” and “out” of the OODA loop have straightforward meanings. But as automation becomes more prevalent in everyday life, an understanding of how humans behave in an in-between state—known as “on the loop”—will become more important.
Missy Cummings, a former Navy fighter pilot and director of Duke University’s Humans and Autonomy Laboratory, defines “on the loop” as human supervisory control: "intermittent human operator interaction with a remote, automated system in order to manage a controlled process or task environment.” Air traffic controllers, for example, are on the loop of the commercial planes flying in their airspace. And thanks to increasingly sophisticated cockpit automation, most of the pilots are, too.
Tesla compares Autopilot with this kind of on-the-loop aviation, saying it “functions like the systems that airplane pilots use when conditions are clear.” But there’s a problem with that comparison, Casner says: “An airplane is eight miles high in the sky.” If anything goes wrong, a pilot usually has multiple minutes—not to mention emergency checklists, precharted hazards and the help of the crew—in which to transition back in the loop of control...
Automobile drivers, for obvious reasons, often have much less time to react. “When something pops up in front of your car, you have one second,” Casner says. “You think of a Top Gun pilot needing to have lightning-fast reflexes? Well, an ordinary driver needs to be even faster.”
...
But NASA has been down this road before, too. In studies of highly automated cockpits, NASA researchers documented a peculiar psychological pattern: The more foolproof the automation’s performance becomes, the harder it is for an on-the-loop supervisor to monitor it. “What we heard from pilots is that they had trouble following along [with the automation],” Casner says. “If you’re sitting there watching the system and it’s doing great, it’s very tiring.” In fact, it’s extremely difficult for humans to accurately monitor a repetitive process for long periods of time. This so-called “vigilance decrement” was first identified and measured in 1948 by psychologist Robert Mackworth, who asked British radar operators to spend two hours watching for errors in the sweep of a rigged analog clock. Mackworth found that the radar operators’ accuracy plummeted after 30 minutes; more recent versions of the experiment have documented similar vigilance decrements after just 15 minutes.
One of the foundational beliefs of 21st century tech narratives is that the biggest obstacles to innovation are legal. If not for lawyers and regulators, we'd be living in a techno-paradise.
With the possible and very qualified exception of pharmaceuticals, this has always been unfounded and in many cases, is the opposite of true. Companies like Tesla have long treated legal checks as figurative and often literal jokes, manipulating markets, misrepresenting products and in some cases, threatening public safety.
What do you call it when a company does something that every other company in remotely adjacent spaces says is dangerous, the evidence of the danger accumulates and then finally the inevitable happens and someone dies?
What's the word for such a total collapse of responsibility?
"Tesla's "deceptive" use of the term "full self-driving" to describe driver-assistance technology will likely lead to more crashes and deaths," @axios took the words right out of our mouth, literally.https://t.co/NCKVvhOBAd
Elon felt forced to release the "Full Self Driving" beta because @Waymo just deployed its Level 4 robotaxis. But releasing his beta on the public only confirms just how far behind #Tesla really is. Elon suffers this dilemma because he has to maintain a #fraud. $TSLA$TSLAQ
6. Again, this "Full Self Driving" #Tesla swerved into a turning lane & required driver intervention.
Tesla's "influencers"/cultists are attacking this account and are trying to ban me from Twitter. They want to silence critics at the expense of public safety. $TSLA$TSLAQ 6/7 pic.twitter.com/OEam2AKWzP
James Randi, a magician who later challenged spoon benders, mind readers and faith healers with such voracity that he became regarded as the country’s foremost skeptic, has died, his foundation announced. He was 92.
The James Randi Educational Foundation confirmed his death, saying that its founder succumbed to “age-related causes” on Tuesday.
...
On a 1972 episode of “The Tonight Show,” he helped Johnny Carson set up Uri Geller, the Israeli performer who claimed to bend spoons with his mind. Randi ensured the spoons and other props were kept from Geller’s hands until showtime to prevent any tampering.
The result was an agonizing 22 minutes in which Geller was unable to perform his tricks.
For Randi, those 22 minutes of magic tricks not being done would, ironically, become a high point of the magician's biography, but there was one more twist in the story.
Adam Higginbotham, writing for the New York Times Magazine in 2014:
“I sat there for 22 minutes, humiliated,” Geller told me, when I spoke to him in September. “I went back to my hotel, devastated. I was about to pack up the next day and go back to Tel Aviv. I thought, That’s it — I’m destroyed.” But to Geller’s astonishment, he was immediately booked on “The Merv Griffin Show.” He was on his way to becoming a paranormal superstar. “That Johnny Carson show made Uri Geller,” Geller said. To an enthusiastically trusting public, his failure only made his gifts seem more real: If he were performing magic tricks, they would surely work every time.
It's a great tale except that there's little reason to believe it actually happened that way. Start with the fact that Geller seems to be the main source, which should have raised some red flags for Higginbotham.
How about the appearance on the Merv Griffin Show? Wasn't he invited shortly after the Carson debacle? Not exactly. He was invited back.
From IMDB:
The Merv Griffin Show (1962–1986) Alfred Drake, Pamela Mason, Uri Geller, Captain Edgar Mitchell TV-PG | 1h | Comedy, Family, Music | Episode aired 19 July 1973
The Tonight Show Starring Johnny Carson (1962–1992) Ricardo Montalban/Eskimo-Indian Olympians/Uri Geller 1h 45min | Comedy, Talk-Show | Episode aired 1 August 1973
The Merv Griffin Show (1962–1986) Eartha Kitt, Richard Dawson, Michelle Phillips, Uri Geller TV-PG | 1h | Comedy, Family, Music | Episode aired 15 August 1973
Geller's telling makes it sound like it was the Carson appearance that got him on Griffin, but he was a returning guest, and for all we know he was invited back simply because he had done well a couple of weeks earlier.
[Late Edit: He'd also made appearances on Jack Paar and Mike Douglas before doing Carson -- MP]
Nor is there evidence that Geller's career took off in late 1973.
If anything, it looks like Randi's debunking of Geller starting with the Tonight Show and culminating with 1975's The Magic of Uri Geller was what brought the charlatan down.
Journalists love people-are-stupid narratives, but, while I believe cognitive dissonance is real, I think the lesson here is not "To an enthusiastically trusting public, his failure only made his gifts seem more real" and is instead that we should all be more skeptical of simplistic and overused pop psychology.
When rich, successful people raise huge amounts of money for undeniably stupid business ideas, there is a natural tendency to assume that it can't be as bad as it sounds. Fight that impulse and remind yourself that the PR people who fed the journalist the story you just read went to great pains to put the best possible spin on the enterprise.
Katzenberg and Whitman told investors Wednesday afternoon that Quibi, which raised $1.75 billion to tackle the growing digital video market, will wind down after failing to attract viewers willing to pay to watch its shows, according to people familiar with the matter who were not authorized to comment.
Hollywood-based Quibi, which employed 265 people as of April, plans to use its remaining cash of about $350 million to pay back investors, sources said, a stunning turn of events for a company that promised to transform the entertainment industry.
Quibi said in a statement Wednesday it will close down the business and begin the process of selling off assets over the coming months.
To coin a phrase, if the masses can't handle the truth and need instead
to be fed a version crafted by the elite to keep the people happy and
doing what's best for them, the public's access to accurate, objective
information has to be tightly controlled.
A lot has changed since then. What was a tightly controlled narrative has taken on a life of its own, and the delegitimizing of science and trustworthy data sources has been fully internalized, and not only by the GOP's cannon fodder.
The most important aspect of Randianism as currently practiced is the
lies its adherents tell themselves. "When you're successful, it's
because other people are inferior to you." "When you fail, it's because
inferior people persecute you (call it going Roark)." "One of these days
you're going to run away and everyone who's been mean to you will be
sorry."
The most important aspect of Straussianism as currently practiced is the
lies its adherents tell others. Having started from the assumption that
traditional democracy can't work because most people aren't smart
enough to handle the role of voter, the Straussians conclude that
superior minds must, for the good of society, lie to and manipulate the
masses.
Joseph and I have an ongoing argument about which school is worse, a
question greatly complicated by the compatibility of the two systems and
the overlap of believers and their tactics and objectives. Joseph
generally argues that Rand is worse (without, of course, defending
Strauss) while I generally take the opposite position.
This week brought news that I think bolsters my case (though I suspect
Joseph could easily turn it around to support his): one of the logical
consequences of assuming typical voters can't evaluate information on
their own is that data sources that are recognized as reliable are a
threat to society. They can't be spun and they encourage people to make
their own decisions.
To coin a phrase, if the masses can't handle the truth and need instead
to be fed a version crafted by the elite to keep the people happy and
doing what's best for them, the public's access to accurate, objective
information has to be tightly controlled. With that in mind, consider
the following from Jared Bernstein:
[D]ue to pressure from Republicans, the Congressional Research Service
is withdrawing a report that showed the lack of correlation between high
end tax cuts and economic growth.
The study, by economist Tom Hungerford, is of high quality, and is one
I’ve cited here at OTE. Its findings are fairly common in the economics
literature and the concerns raised by that noted econometrician Mitch
McConnell are trumped up and bogus. He and his colleagues don’t like
the findings because they strike at the supply-side arguments that they
hold so dear.
And with Sandy still on everyone's mind, here's something from Menzie Chinn:
NOAA's programs are in function 300, Natural Resources and Environment,
along with the U.S. Geological Survey (USGS) and a range of conservation
and natural resources programs. In the near term, function 300 would be
14.6 percent lower in 2014 in the Ryan budget according to the
Washington Post. It quotes David Kendall of The Third Way as warning
about the potential impact on weather forecasting: "'Our weather
forecasts would be only half as accurate for four to eight years until
another polar satellite is launched,' estimates Kendall. 'For many
people planning a weekend outdoors, they may have to wait until Thursday
for a forecast as accurate as one they now get on Monday. … Perhaps
most affected would be hurricane response. Governors and mayors would
have to order evacuations for areas twice as large or wait twice as long
for an accurate forecast.'"
Apparently, Jack Welch, former chairman and CEO of General Electric, is
accusing the Bureau of Labor Statistics of manipulating the jobs report
to help President Obama. Others seem to be adding their voices to this
slanderous lie. It is simply outrageous to make such a claim and echoes
the worrying general distrust of facts that seems to have swept segments
of our nation. The BLS employment report draws on two surveys, one (the
establishment survey) of 141,000 businesses and government agencies and
the other (the household survey) of 60,000 households. The household
survey is done by the Census Bureau on behalf of BLS. It’s important to
note that large single-month divergences between the employment numbers
in these two surveys (like the divergence in September) are just not
that rare. EPI’s Elise Gould has a great paper on the differences
between these two surveys.
BLS is a highly professional agency with dozens of people involved in
the tabulation and analysis of these data. The idea that the data are
manipulated is just completely implausible. Moreover, the data trends
reported are clearly in line with previous monthly reports and other
economic indicators (such as GDP). The key result was the 114,000
increase in payroll employment from the establishment survey, which was
right in line with what forecasters were expecting. This was a positive
growth in jobs but roughly the amount to absorb a growing labor force
and maintain a stable, not falling, unemployment rate. If someone wanted
to help the president, they should have doubled the job growth the
report showed. The household survey was much more positive, showing
unemployment falling from 8.1 percent to 7.8 percent. These numbers are
more volatile month to month and it wouldn’t be surprising to see
unemployment rise a bit next month. Nevertheless, there’s nothing
implausible about the reported data. The household survey has shown
greater job growth in the recovery than the establishment survey
throughout the recovery. The labor force participation rate (the share
of adults who are working or unemployed) increased to 63.6 percent,
which is an improvement from the prior month but still below the 63.7
percent reported for July. All in all, there was nothing particularly
strange about this month’s jobs reports—and certainly nothing to spur
accusations of outright fraud.
We can also put many of the attacks against Nate Silver in this category.
Going back a few months, we had this from Businessweek:
The House Committee on Appropriations recently proposed cutting the
Census budget to $878 million, $10 million below its current budget and
$91 million less than the bureau’s request for the next fiscal year.
Included in the committee number is a $20 million cut in funding for
this year’s Economic Census, considered the foundation of U.S. economic
statistics.
And Bruce Bartlett had a whole set of examples involving Newt Gingrich:
On Nov. 21, Newt Gingrich, who is leading the race for the Republican
presidential nomination in some polls, attacked the Congressional Budget
Office. In a speech in New Hampshire, Mr. Gingrich said the C.B.O. "is a
reactionary socialist institution which does not believe in economic
growth, does not believe in innovation and does not believe in data that
it has not internally generated."
Mr. Gingrich's charge is complete nonsense. The former C.B.O. director
Douglas Holtz-Eakin, now a Republican policy adviser, labeled the
description "ludicrous." Most policy analysts from both sides of the
aisle would say the C.B.O. is one of the very few analytical
institutions left in government that one can trust implicitly.
It's precisely its deep reservoir of respect that makes Mr. Gingrich
hate the C.B.O., because it has long stood in the way of allowing
Republicans to make up numbers to justify whatever they feel like doing.
...
Mr. Gingrich has long had special ire for the C.B.O. because it has
consistently thrown cold water on his pet health schemes, from which he
enriched himself after being forced out as speaker of the House in 1998.
In 2005, he wrote an op-ed article in The Washington Times berating the
C.B.O., then under the direction of Mr. Holtz-Eakin, saying it had
improperly scored some Gingrich-backed proposals. At a debate on Nov. 5,
Mr. Gingrich said, "If you are serious about real health reform, you
must abolish the Congressional Budget Office because it lies."
...
Because Mr. Gingrich does know more than most politicians, the main
obstacles to his grandiose schemes have always been Congress's
professional staff members, many among the leading authorities anywhere
in their areas of expertise.
To remove this obstacle, Mr. Gingrich did everything in his power to
dismantle Congressional institutions that employed people with the
knowledge, training and experience to know a harebrained idea when they
saw it. When he became speaker in 1995, Mr. Gingrich moved quickly to
slash the budgets and staff of the House committees, which employed
thousands of professionals with long and deep institutional memories.
Of course, when party control in Congress changes, many of those
employed by the previous majority party expect to lose their jobs. But
the Democratic committee staff members that Mr. Gingrich fired in 1995
weren't replaced by Republicans. In essence, the positions were simply
abolished, permanently crippling the committee system and depriving
members of Congress of competent and informed advice on issues that they
are responsible for overseeing.
Mr. Gingrich sold his committee-neutering as a money-saving measure. How
could Congress cut the budgets of federal agencies if it wasn't willing
to cut its own budget, he asked. In the heady days of the first
Republican House since 1954, Mr. Gingrich pretty much got whatever he
asked for.
In addition to decimating committee budgets, he also abolished two
really useful Congressional agencies, the Office of Technology
Assessment and the Advisory Commission on Intergovernmental Relations.
The former brought high-level scientific expertise to bear on
legislative issues and the latter gave state and local governments an
important voice in Congressional deliberations.
The amount of money involved was trivial even in terms of Congress's
budget. Mr. Gingrich's real purpose was to centralize power in the
speaker's office, which was staffed with young right-wing zealots who
followed his orders without question. Lacking the staff resources to
challenge Mr. Gingrich, the committees could offer no resistance and his
agenda was simply rubber-stamped.
Unfortunately, Gingrichism lives on. Republican Congressional leaders
continually criticize every Congressional agency that stands in their
way. In addition to the C.B.O., one often hears attacks on the
Congressional Research Service, the Joint Committee on Taxation and the
Government Accountability Office.
Lately, the G.A.O. has been the prime target. Appropriators are cutting
its budget by $42 million, forcing furloughs and cutbacks in
investigations that identify billions of dollars in savings yearly. So
misguided is this effort that Senator Tom Coburn, Republican of Oklahoma
and one of the most conservative members of Congress, came to the
agency's defense.
In a report issued by his office on Nov. 16, Senator Coburn pointed out
that the G.A.O.'s budget has been cut by 13 percent in real terms since
1992 and its work force reduced by 40 percent -- more than 2,000 people.
By contrast, Congress's budget has risen at twice the rate of inflation
and nearly doubled to $2.3 billion from $1.2 billion over the last
decade.
Mr. Coburn's report is replete with examples of budget savings
recommended by G.A.O. He estimated that cutting its budget would add
$3.3 billion a year to government waste, fraud, abuse and inefficiency
that will go unidentified.
For good measure, Mr. Coburn included a chapter in his report on how
Congressional committees have fallen down in their responsibility to
exercise oversight. The number of hearings has fallen sharply in both
the House and Senate. Since the beginning of the Gingrich era, they have
fallen almost in half, with the biggest decline coming in the 104th
Congress (1995-96), his first as speaker.
In short, Mr. Gingrich's unprovoked attack on the C.B.O. is part of a
pattern. He disdains the expertise of anyone other than himself and is
willing to undercut any institution that stands in his way.
Unfortunately, we are still living with the consequences of his foolish
actions as speaker.
We could really use the Office of Technology Assessment at a time when
Congress desperately needs scientific expertise on a variety of issues
involving health, energy, climate change, homeland security and many
others. And given the enormous stress suffered by state and local
governments as they are forced by Washington to do more with less, an
organization like the Advisory Commission on Intergovernmental Relations
would be invaluable.
I realized that I haven't done any Halloween posts this year so here are some outstanding chillers from the Golden Age of Radio.
Starring William Conrad, William Conrad, and William Conrad. Narrated by William Conrad.
And a must listen for Vincent Price fans.
John Houseman once said that Orson Welles convinced himself that he had written everything he appeared in with the possible exception of the works of Shakespeare. Lucille Fletcher would probably agree.
History buffs will want to check out the War Bonds pitch at the end.
And perhaps Lucille Fletcher's high point (featuring America's best actor according to Welles).
“I think it’s also … this newfound freedom for many people that now can work from anywhere,” Schuler said. “They’re picking up and moving wherever they want to, for the first time ever, where they’ve had this ability to do so.”
In the Marketplace-Edison Research Poll, nearly half of people who had moved or thought about moving said the ability to work from anywhere was a factor.
I don't have numbers but I'm reasonably certain this is the largest
number of Americans working from home since the advent of the internet
and the smart phone.
There's no good technological reason why most knowledge workers need to
live within a hundred or even a thousand miles of where they work. The
obstacles are cultural but they are still formidable. Despite a tight
job market and a growing housing crisis centered around a handful of
overcrowded and overpriced cities, employers have been slow to consider
alternative models.
Now new models are being forced upon everybody. New things will be
tried. Adaptations will be made. Bugs will be worked out. Attitudes will
shift.
Fifty years from now, this might be what Covid-19 is remembered for.
Smart people, like statisticians' models, are often most interesting
when they are wrong. There is no better example of this than Arthur C
Clarke's 1964 predictions about the demise of the urban age, where he
suggested that what we would now call telecommuting would end the need
for people to congregate around centers of employment and would
therefore mean the end of cities.
What about the city of the day after tomorrow? Say, the year 2000. I
think it will be completely different. In fact, it may not even exist at
all. Oh, I'm not thinking about the atom bomb and the next Stone Age;
I'm thinking about the incredible breakthrough which has been made
possible by developments in communications, particularly the transistor
and above all the communications satellite. These things will make
possible a world where we can be in instant contact with each other
wherever we may be, where we can contact our friends anywhere on earth
even if we don't know their actual physical location. It will be
possible in that age, perhaps only 50 years from now, for a man to
conduct his business from Tahiti or Bali just as well as he could from
London. In fact, if it proved worthwhile, almost any executive skill,
any administrative skill, even any physical skill could be made
independent of distance. I am perfectly serious when I suggest that
someday we may have brain surgeons in Edinburgh operating on patients in
New Zealand. When that time comes, the whole world will have shrunk to a
point and the traditional role of a city as the meeting place for man
will have ceased to make any sense. In fact, men will no longer commute;
they will communicate. They won't have to travel for business anymore;
they'll only travel for pleasure. I only hope that, when that day comes
and the city is abolished, the whole world isn't turned into one giant
suburb.
Clarke was working with a 20 to 50 year timeframe, so it's fair to say
that he got this one wrong. The question is why. Both as a fiction
writer and a serious futurist, the man was remarkably and famously
prescient about telecommunications and its impact on society. Even here,
he got many of the details right while still being dead wrong on the
conclusion.
What went wrong? Part of this unquestionably has to do with the nature
of modern work. Clarke probably envisioned a more automated workplace in
the 21st century, one where stocking shelves and cleaning floors and,
yes, driving vehicles would be done entirely by machines. He likely also
underestimated the intrinsic appeal of cities.
But I think a third factor may well have been bigger than either of
those two. The early 60s was an anxious but optimistic time. The sense
was that if we didn't destroy ourselves, we were on the verge of great
things. The 60s was also the last time that there was anything
approaching a balance of power between workers and employers.
This was particularly true with mental work. At least in part because of
the space race, companies like Texas Instruments were eager to find
smart capable people. As a result, employers were extremely flexible
about qualifications (a humanities PhD could actually get you a job) and
they were willing to make concessions to attract and keep talented
workers.
Telecommuting (as compared to offshoring, a distinction I'll need to
get into in a later post) offers almost all of its advantages to the
worker. The only benefit to the employer is the ability to land an
otherwise unavailable prospect. From the perspective of 1964, that would
have seemed like a good trade, but those days are long past.
For the past 40 or so years, employers have worked under (and now
completely internalized) the assumption that they could pick and choose.
When most companies post jobs, they are looking for someone who either
has the exact academic background required, or preferably, someone who
is currently doing almost the same job for a completely satisfied
employer and yet is willing to leave for roughly the same pay.
When you hear complaints about "not being able to find qualified
workers," it is essential to keep in mind this modern standard for
"qualified." 50 or 60 years ago it meant someone who was capable of
doing the work with a bit of training. Now it means someone who can walk
in the door, sit down at the desk, and immediately start working. (Not
to say that new employees will actually be doing productive work from
day one. They'll be sitting in their cubicles trying to look busy for
the first two or three weeks while IT and HR get things set up, but
that's another story.)
Arthur C Clarke was writing in an optimistic age where workers were on
an almost equal footing with management. If the year 2000 had looked
like the year 1964, he just might have gotten this one right.
The US Supreme Court has been a source of odd thinking lately
Josh Marshall has a post about how a reader suspects that John Roberts might decide to retire from the Supreme Court in order to preserve institutional legitimacy. I think this shows just how deranged the thinking about the Supreme Court has become. Is there anything that makes people think that judges voluntarily give up power?
He is 65, way younger than Sandra Day O'Connor (around 78) or Anthony Kennedy (around 82) were when they retired. It just makes no sense that he'd give up influence on the court and the ability to advocate for legitimacy unless there was an external reason.
Similarly, there is an odd sense of ownership of Supreme Court seats. People seem to think we should honor Ruth Bader Ginsburg's wish that her seat not be filled until after the election. There are a ton of very good reasons not to fill the seat quickly: recent precedent (Scalia's replacement), lack of time to do a thorough vetting, and an ongoing pandemic creating the need to focus on helping economic and medical victims all come immediately to mind. But the idea that the seat is a feudal inheritance, with the current occupant controlling how it is filled in the future, is not a good thing for the health of the republic.
Finally, the fact that covering pre-existing conditions is coming back up to the court, one more time, suggests that judicial rulings are becoming deeply unserious. The US has an insurance system that is based on churn, as people change jobs many times over their careers, often without a whole lot of choice. Prices are opaque and systematically higher for people without insurance (the system is not set up for negotiation). Covering pre-existing conditions is simply a prerequisite for having the system function at all. That a seat flip could credibly threaten this act, ten years after it passed and after repeated judicial review, is . . . well, hard to credit is the nice way to say this.
Another easy litmus test. As a Canadian, I cannot name a single Supreme Court justice. I kind of remember the chief justice if I really think about it. I can name every single US justice, and give a quick summary of their politics. I worry that the politicization of the court will end in tears.
Another sign of the process being broken is noted here.
Amy Barrett is a former law professor and a sitting judge. She does not have any idea whether Medicare and Social Security are constitutional? Not even hedging by saying that she is unaware of any grounds on which they could be challenged. What is the point of the hearing if the justice is completely ignorant (or pretends to be ignorant) of law and policy? Not even willing to comment on the grounds on which these laws are currently based? What did she teach students? That there are laws, but nobody can tell whether they are compatible with the Constitution, and she hasn't read any precedent?
You might call this the flip side of our Trash, Art and the Comics post. There we talked about gaining insights into a period through its pop culture. Today's post is about how modern audiences often miss the point of older works because they don't catch the obvious.
Though largely lost on today's readers, for those picking up the book in the 1890s, the contrast between Dracula's Transylvania and modern day London was obvious and explicit.
In the 1890s, people quite rightly thought of themselves as living at the height of a period of unprecedented scientific and technological breakthroughs. (I'd argue still unprecedented, but that's a topic for another post.) The characters in Dracula are well aware of the incongruity of a vampire roaming a contemporary metropolis. They discuss it at length throughout the book. Their leader, Dr. Van Helsing, explicitly frames the conflict as modern science versus an ancient supernatural force.
Needless to say, most of this subtext never makes it into the adaptations, but even among readers of the original, I suspect most are likely to miss the point because we tend to grossly underestimate the period's sophisticated awareness of how science and technology were changing their world.
Not really up on total factor productivity (assuming that's what they're talking about) but this can't be right.
Really mixed feelings about a metric that's supposed to be driven by technological innovation yet peaks in the late 1930s rather than the post-war era.
Meet the scutoid, a geometrical shape new to math, but not to nature — is the form that a group of cells in the body takes in order to pack tightly and efficiently into the tricky curves of organs https://t.co/K17sVjlGi2 pic.twitter.com/rbatgVLdvP