Thursday, March 20, 2025

Perhaps we should let Turing off the hook as well

A little while back, we came to the defense of Arthur Conan Doyle by pointing out that surprisingly many intellectuals and scientists, including those of the caliber of Albert Einstein and Pierre Curie, believed in or were at least open to the possibility of the paranormal back in the late 19th and early 20th centuries, and that these beliefs were far more defensible 120 years ago than they are today.

Curie had company. William Crookes and fellow Nobel laureate Charles Richet were two of the many scientists who devoted a large part of their time to paranormal research. Albert Einstein wrote the preface to Mental Radio. Respected publications ran serious discussions about psychic and supernatural phenomena. Not everyone was a believer, but it was rare to find people in the "absolutely, positively no-way" camp either.

Nor were the Curies the only ones to make a connection between the fantastic scientific discoveries of the day and the possibility of the paranormal. When Edison stumbled on the first radio transmitter back in the 1870s (that's another story we need to get into), he wondered if it might explain telepathy. When the later-debunked N-rays were announced, scientists speculated that they too might explain psychic phenomena.

When you are constantly bombarded with news of the seemingly magical—be it cylinders that capture voices, strange rays that let doctors see through flesh, or communications that somehow travel instantly through the air—it would probably seem foolhardy to prematurely dismiss the possibility of something outside of our range of knowledge.

We, on the other hand, have two large bodies of knowledge to draw on when evaluating claims of the supernatural. First, we have well over a century of paranormal research where, when you take out the frauds and the fatally flawed test designs, you find no support whatsoever for these claims. Second, we have a century of research into physics that has effectively ruled out any possibility of these forces causing ghosts to walk or psychics to tell you anything that couldn't be learned from a cold reading.


This still doesn't explain why someone would be making these claims as late as the middle of the last century:

 ‘I assume that the reader is familiar with the idea of extra-sensory perception … telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas … Unfortunately the statistical evidence, at least for telepathy, is overwhelming … Once one has accepted them it does not seem a very big step to believe in ghosts and bogies.’


These words weren’t published in the pages of an obscure occult journal or declared at a secret parapsychology conference. They weren’t written by a Victorian spiritualist or a séance attendee. In fact, their author is Alan Turing, the father of computer science, and they appear in his seminal paper ‘Computing Machinery and Intelligence’ (1950), which describes the ‘imitation game’ (more commonly known as the ‘Turing test’) designed to establish whether a machine’s intelligence could be distinguished from that of a human.

The paper starts by setting up the now-famous thought experiment: a human, a machine, and an observer who asks questions. If the observer cannot work out which one is which based on their responses, the machine has passed the test: its intelligence is indistinguishable from that of a human mind. The vast majority of the paper addresses various objections against the experiment from mathematics, philosophy of mind, or from those sceptical about the power of computers.

But, about two-thirds of the way through the paper, Turing addresses an unexpected worry that might disrupt the imitation game: telepathy. If the human and the observer could communicate telepathically (which the machine supposedly could not do), then the test would fail. ‘This argument is to my mind quite a strong one,’ says Turing. In the end, he suggests that, for the test to work properly, the experiment must take place in a ‘telepathy-proof room’.

Why did Turing feel the need to talk about telepathy? Why did he consider extrasensory perception a serious objection to his thought experiment? And what about his peculiar mention of ghosts?

Matyáš Moravec, a lecturer in philosophy at Queen's University Belfast, explains that while Turing's beliefs may have been questionable, they weren't all that unconventional in the mid-century halls of Cambridge.

 In 1882, a group of scholars associated with Trinity College, Cambridge, founded the Society for Psychical Research, a learned society whose aim was to study these phenomena with scientific rigour. Over the course of its existence, the society counted among its members some of the brightest minds of the time, including the author Arthur Conan Doyle and the physicist J J Thomson. Its members displayed various levels of commitment to the phenomena of the séance room. Some were very committed, such as the physicist Oliver Lodge, who wrote a highly influential book containing records of séances where he communicated with his son who was killed in action during the First World War. Some were much more sceptical, like Eleanor Sidgwick, a physics researcher and the principal of Newnham College, Cambridge, or John Venn, the inventor of the Venn diagram. Despite the differences in their level of commitment to the reality of paranormal occurrences, this heterogeneous group of scholars agreed that these phenomena deserved scholarly attention.

A high number of professional philosophers either joined the society or engaged with its findings. The ethicist Henry Sidgwick (Eleanor’s husband) was one of its founders, together with F W H Myers, the inventor of the word ‘telepathy’ and an unusually committed member: according to at least one report, he continued his involvement after his death, sending messages from the beyond via various mediums (a sort of extreme version of the professor emeritus who occasionally drops by the department). Such esteemed philosophers as Henri Bergson, William James and F C S Schiller were each elected president of the society. Many other philosophers, such as May Sinclair, were regular members.

...

Serialism begins with the premise that anything that moves must move in time. For example, a car moving on a road is in one place at t1, and at another place further up the road at t2. If there was no time, it would be hard to describe the car as ‘moving’ at all. But, Dunne argued, time itself moves. It would be strange to say that time does not move. It cannot stand still. (The British philosopher Antony Flew noted that Dunne does not distinguish straightforward spatial motion from the elusive ‘flow’ of time, which is the main blunder that eventually leads to his bizarre theory of time.) So, Dunne thought, there must be a meta-time or hypertime, a higher series of ‘time above time’ describing the movement of the time we inhabit. But this ‘time above time’ must also move (otherwise it would not be time). So, there must be a third series of time, and so on to infinity. Dunne believed that, when we dream, our minds can gain access to these higher series of times, some of which contain future events.

The book became a bestseller. It was republished in several editions and influenced several key 20th-century writers, including Jorge Luis Borges, C S Lewis, J B Priestley and J R R Tolkien. The physicist Arthur Eddington wrote an approving letter to Dunne, which was published in later editions of the book. The wider public became fascinated with the idea of ‘seeing the future’, to the point where the phrase ‘Dunne dream’ became a shorthand for any precognitive dream.

By contrast, the book was ridiculed in academic circles. Not so much because Dunne believed in precognitive dreams (as a matter of fact, many academics thought that the catalogue he provided of these dreams was the only valuable thing about his book [This strikes me as particularly significant. Lots of intellectuals accepted the idea that precognition was real or needed to be studied; they were just skeptical of that particular explanation. -- MP]), but because of his bizarre theory of time. One reviewer in The Journal of Philosophy said that it was very hard to take it seriously, another in Nature considered the possibility that the book was just a practical joke, and an unimpressed philosopher called serialism a 'logical extravaganza'.

Broad was the only professional philosopher to seriously engage with Dunne. Trinity College holds Broad’s own copies of Dunne’s books, whose margins are covered in extensive notes containing suggestions for fixing serialism and using it to generate a philosophically robust explanation of precognitions. He even published a stand-alone paper on Dunne in the journal Philosophy in 1935, and Dunne features in several places of his magnum opus, Examination of McTaggart’s Philosophy (1933-8).

Broad addressed what might be called the ontological problem with precognitions. In the first half of the 20th century, one of the most widely accepted views of time was the ‘growing block’ theory, which says that the past exists, the present exists, but the future does not. This accords pretty well with our everyday intuitions about time, but poses a particular philosophical problem for the existence of precognitions. Say I foresee, in a dream, what happens next Saturday. Supposing the growing-block theory is true, next Saturday does not (yet) exist. So, what was it that I saw in the dream? How could I have seen something that does not exist?

The engagement with Dunne motivated Broad to propose one of the first ‘hypertime’ theories. This is a group of theories that claim that time has two (or more) dimensions, similar to the way that the space we inhabit is three-dimensional. In Broad’s view, one dimension had a growing-block structure. In this dimension, the future does not exist. But, he suggested, there could also be a second dimension of time in which the future does exist, and so contains the foreseen event. In a precognitive dream, we gain access to it. This is a streamlining of Dunne’s logical extravaganza: Broad got rid of the idea that time itself must move through time, and ditched the ‘infinite’ series of dimensions. Two dimensions are enough.


Wednesday, March 19, 2025

Noah may be overthinking "We'll do the Flintstones but in the future, with robots instead of dinosaurs."

I wrote this about a year ago but it somehow disappeared into the draft folder shortly after Noah Smith wrote this post. Fortunately, though, critiques of techno-optimism have remained as relevant as ever (and this one dovetails nicely with last Friday's post).

This is why The Jetsons focused on robots doing housework; they knew that this would be one of the technological innovations that would have the most immediate and unambiguously positive impact on Americans’ quality of life — especially women, who at that time were even more disproportionately burdened with housework than they are now. Housework is a job we want the robots to take.

 

In the fifties and sixties, ABC was a perennial third out of three, so it was open to ideas like primetime cartoons. This paid off with the Flintstones, a big hit by the network's standards, so an internal rip-off was inevitable. 

Here's what we said about it in a previous post:

When people talk about the Jetsons having one season, they are talking about its original prime time run. ABC in the 50s and 60s was more or less in the same position as Fox in the '90s, a perennial last place and a bit of a joke. This was partially due to the network's origin. It was carved off from NBC as the result of an antitrust action from the government against NBC. Like Fox, ABC tried a lot of out of the box programming including prime time animation. They had a moderate hit with The Flintstones which ran for a number of years. The Jetsons was an attempt to cash in on what they hoped would be a trend. (Jonny Quest also had its initial run in prime time on the network.) By the late Sixties, no one in the target audience had any idea that any of these shows had ever been anything but Saturday morning cartoons.

In addition to being inspired by The Flintstones, which was itself a rip-off of The Honeymooners, The Jetsons lifted much of its premise and many of its gags from the Blondie movie series, still well remembered at the time, even going so far as to cast the same lead actress.

Lifting characters and premises from other people's intellectual property was a bit of a Hanna-Barbera specialty. While it was common practice for all the cartoon studios to toss in celebrity caricatures and other references/homages (see Andrew Gelman's Foghorn Leghorn category), HB took things to an extreme, seldom producing anything that wasn't at least partially lifted from familiar pop culture. For example, Scooby-Doo was a mashup of Dobie Gillis (particularly Bob Denver's character Maynard G. Krebs) and the fake haunted house genre. (Shaggy also falls into the Foghorn Leghorn category.)


If you're looking for where a Hanna-Barbera concept came from, you should always start (and probably stop) with the familiar pop culture of the time. In addition to the Flintstones, that included wise-cracking housekeepers. Hazel, which had debuted the year before, was the fourth highest rated show in the country. The studio's decision to include a robot maid is not really a head-scratcher.

I know this all seems rather trivial given the recent news, particularly since Noah Smith is a sharp guy and we're going to be picking on him quite a bit over more substantial issues in the coming weeks. But the techno-optimist crowd, led by people like Marc Andreessen and Elon Musk, has amassed considerable economic and political power and exerts a wildly disproportionate influence on the discourse, and for those reasons alone, it is essential that we understand this movement and where it's coming from.

When you take a deep dive into this, you soon arrive at the frightening realization that our supposedly serious discussions of the future are mainly based on adolescent post-war science fiction. (I should probably try to work the vanity aerospace industry in here somewhere.) Arguments about the coming wave of technology based on Saturday morning memories of a 65-year-old cartoon show are absolutely on-brand.

Tuesday, March 18, 2025

Every picture tells a story

 A bit of context. Substantially more people died in the Eaton fire than in the Palisades fire. Far more people were left homeless in Altadena than in Pacific Palisades. In terms of the rest of LA, it was Eaton that came dangerously close to highly populated areas and could have racked up a horrific death toll had things gone differently. 

What happened in Altadena was many times more newsworthy than what happened in Pacific Palisades. There was only one reason to talk mainly about the latter: that neighborhood was where the rich and famous people lived.






Monday, March 17, 2025

"The SAT Gave Me Hope"

Back in 2014, the New York Times published a brief memoir about taking the SAT and how unfair it had been. This was another case of the paper of record bravely leaping on the bandwagon -- almost everyone was going after standardized tests at the time. Absent from the opinion piece was any mention of the fact that the memoirist was attending one of the most elite prep schools in the country (the kind of place where you see names like du Pont on the alumni rolls) when she took the test.

Ten years later, after the tide of public opinion on the test had started to shift yet again, the NYT published another SAT memoir, this time from a woman whose situation and experience were very different. It's definitely a story you should read.

Opinion | The SAT Gave Me Hope
Emi Nietfeld

For many of us, standardized tests provided our one shot to prove our potential, despite the obstacles in our lives or the untidy pasts we had. We found solace in the objectivity of a hard number and a process that — unlike many things in our lives — we could control. I will always feel tenderness toward the Scantron sheets that unlocked higher education and a better life.

Growing up, I fantasized about escaping the chaos of my family for the peace of a grassy quad. Both my parents had mental health issues. My adolescence was its own mess. Over two years I took a dozen psychiatric drugs while attending four different high school programs. At 14, I was sent to a locked facility where my education consisted of work sheets and reading aloud in an on-site classroom. In a life skills class, we learned how to get our G.E.D.s. My college dreams began to seem like delusions.

Then one afternoon a staff member handed me a library copy of “Barron’s Guide to the ACT.” I leafed through the onionskin pages and felt a thunderclap of possibility. I couldn’t go to the bathroom without permission, let alone take Advanced Placement Latin or play water polo or do something else that would impress elite colleges. But I could teach myself the years of math I’d missed while switching schools and improve my life in this one specific way.

After nine months in the institution, I entered foster care. I started my sophomore year at yet another high school, only to have my foster parents shuffle my course load at midyear, when they decided Advanced Placement classes were bad for me. In part because of academic instability like this, only 3 to 4 percent of former foster youth get a four-year college degree.

Later I bounced between friends’ sofas and the back seat of my rusty Corolla, using my new-to-me SAT prep book as a pillow. I had no idea when I’d next shower, but I could crack open practice problems and dip into a meditative trance. For those moments, everything was still, the terror of my daily life softened by the fantasy that my efforts might land me in a dorm room of my own, with endless hot water and an extra-long twin bed.

...

The hope these exams instilled in me wasn’t abstract: It manifested in hundreds of glossy brochures. After taking the PSAT in my junior year, universities that had received my score flooded me with letters urging me to apply. For once, I felt wanted. These marketing materials informed me that the top universities offered generous financial aid that would allow me to attend for free. I set my sights higher, despite my guidance counselor’s lack of faith.

Here's how her story turned out:

Emi Nietfeld is a writer and software engineer. After graduating from Harvard College in 2015, she worked at Google and Facebook. Her essays have appeared in The New York Times, The Rumpus, Vice, and other publications. She lives in New York City with her family.

And here's what we said eleven years ago:

This pool of those likely to gain is quite large. Having gone to a perfectly good but not outstanding public high school in the middle of the country, I can tell you that the best and often the only feasible way for most students to catch the eye of an elite college recruiter (with the possible exception of athletic accomplishment) is through high SAT scores. It is possible for a valedictorian from a no-name high school to get into an Ivy League school without killer test scores, but they won't be pursued the way students who have broken 700 across the board on the SAT will. For a lot of middle-class students, the SAT and ACT represent their best chances at a really prestigious school, not to mention the scholarships most Americans need to attend those schools.

Monday, March 24, 2014

SAT winners and losers

One thing I've noticed about the recent calls to end the SAT is that the test is framed entirely as an obstacle. At no point is there any suggestion that some students might have more educational opportunity because of the test. Obviously, that can't be true. There is clearly a zero-sum aspect to this. When someone bombs their SAT, Harvard does not reduce its admissions by one.

This pool of those likely to gain is quite large. Having gone to a perfectly good but not outstanding public high school in the middle of the country, I can tell you that the best and often the only feasible way for most students to catch the eye of an elite college recruiter (with the possible exception of athletic accomplishment) is through high SAT scores. It is possible for a valedictorian from a no-name high school to get into an Ivy League school without killer test scores, but they won't be pursued the way students who have broken 700 across the board on the SAT will. For a lot of middle-class students, the SAT and ACT represent their best chances at a really prestigious school, not to mention the scholarships most Americans need to attend those schools.

This suggests an interesting framework for looking at the likely winners and losers under the current SAT system. Let's define winners as those for whom the potential benefits of a very high score are larger than the potential downside from an average or below score, and losers as the opposite.

What would these two groups tend to look like? We have already partially answered this question for the winners. They would come from no-name public high schools. They would tend not to live near a major academic center such as the Northeast or Central or Southern California (since proximity increases the chance of networking). They would be middle or lower income (or at least low enough for twenty or thirty thousand dollars of annual tuition to be a significant hardship).

How about the losers by this standard? Remember, these are people who would gain relatively little from a very high score. That rules out anyone not fairly well-to-do (most of the rest of us can really use a full-ride scholarship). They would probably attend the kind of elite and very pricey prep schools that are expert at getting their students into top universities. They would have the support network of connections and role models that makes the application process go much smoother.

We previously discussed the op-ed by Jennifer Finney Boylan. Boylan clearly saw herself as someone who was more likely to be hurt than helped by the SAT, so it would be interesting to see how well her background matches the group above. A quick stop at Wikipedia reveals that, though Boylan has overcome many challenges in her life, academic hardship does not appear to have been one of them. At the time her anecdote took place, she was about to graduate from the Haverford School. Haverford is almost a living cliché of an elite prep school, one of those places the rich and powerful graduate from and send their children to.

It's true that there are ways that people with money can gain an advantage on the SAT. There are, however, considerably more and more effective ways that people with money and position can gain an advantage in all of the other factors used to rank potential college applicants: grades; school standing; extracurricular activities; recommendations; connections; the daunting application process. Students in Boylan's position have massive advantages. You could make the case that, as a high school student competing for a spot in a prestigious university, the only time Boylan had to compete on a roughly even playing field was when she took the SAT, and it is worth noting that she resents it to this day.

That said, I don't want to single Boylan out. My concern here is with the insularity of the elites in our society and with the way that certain media outlets, particularly the New York Times, have come to view the world from their vantage point.

Friday, March 14, 2025

At least he didn't call it 'crowd sourcing'

[While looking for notes I'd made for last week's Hitchcock/Lovecraft post, I searched for that author's name and this popped up. I wrote it twelve years ago but for some reason never got around to hitting publish. I've shifted a bit in my feelings about these writers, but for the most part, the following still stands.]

I came across this piece by Jess Nevins as part of a silly they're-trying-to-suppress-Lovecraft thread on a chat room that will remain linkless. Overall, I thought it was pretty good, but this passage bothered me for a couple of reasons. First, of course, was the trendy and not all that apt use of 'open-source.' Second was how close this comes to, for me, the most interesting part of the Lovecraft phenomenon before veering away:

"The question of why Lovecraft gained in popularity after his death and Clark Ashton Smith or Algernon Blackwood did not is slightly more complicated. Lovecraft escaped the fate of the vast majority of writers — obscurity, to a greater or lesser degree — through several extra-literary events. Luckhurst only alludes to Lovecraft’s letter writing, but it was critical in establishing Lovecraft as a literary presence to his contemporaries. Lovecraft was an extraordinary correspondent, writing an estimated hundred thousand letters in his lifetime, to fans and fellow writers, especially those working for the pulp Weird Tales. Decades before the social media, Lovecraft used letter writing to create a presence for himself in the consciousness of fans and writers and to create the social capital that paid off after his death.

"Too, Lovecraft was the first author to create an open-source fictional universe. The crossover, the meeting between two or more characters from discrete texts, is nearly as old as human culture, beginning with the Greeks if not the Sumerians. The idea of a fictional universe open to any creator who wants to take part in it is considerably newer. French authors like Verne [*] and Balzac had created the idea of a single universe linked through multiple texts, and following them, the dime novels and story papers of the late nineteenth and early twentieth centuries had established the idea of ongoing fictional universes, but those universes were limited to magazines published by the original stories’ publishers. It was Lovecraft who first created a fictional universe that anyone was welcome to take part in. Both during his lifetime and immediately afterward, other authors made use of Lovecraft’s ideas and creations in their own stories and novels. Lovecraft’s generosity with his own creations ultimately gave them a longevity that other, better writers’ ideas and characters did not have."

The open-source line seems rather silly both because this sort of thing had been going on for years and because it doesn't really describe the Lovecraft Circle. As for precedents, characters without clear owners (particularly from history, folklore and fakelore) were created by open groups of writers building on each others' ideas. For just one of the examples that preceded the Lovecraft Circle, check out Pecos Bill.

Some Lovecraft scholar might call me on this, but the Circle seems to have consisted mainly of established (Clark Ashton Smith, Robert E. Howard) or, in a few cases, promising (Robert Bloch) writers with a previous epistolary relationship with Lovecraft, rather than being something open to anyone. It was, nonetheless, an extraordinary bit of literary symbiosis organized around Lovecraft's impressive strengths and severe limitations.

Put bluntly, Lovecraft was an otherwise mediocre writer capable of moments of brilliance. His stories often gave the impression of being rough drafts or, in the case of The Call of Cthulhu, notes for a story (critic E.F. Bleiler aptly called Cthulhu "a fragmented essay with narrative inclusions"). Lovecraft constantly produced wonderful elements, but his finished works tended to be less than the sum of their parts. Writers like Howard could use these ideas to their fullest potential and by doing so could promote Lovecraft far more effectively.

Just to be clear, Lovecraft was a genuinely collaborative spirit, eager to participate and supportive of other writers (qualities probably enhanced by his association with the amateur press movement) and there was more to the Circle than just picking through Lovecraft's ideas. These were gifted people working in a period of great literary exploration that extended from high culture to genre fiction and the Circle's experiment in literary support and cross-fertilization is notable even given the standards of the time.



* Not exactly a consistent one. Thanks to a deeply muddled chronology, Mysterious Island takes place both before and after Twenty Thousand Leagues.

Thursday, March 13, 2025

I tried to come up with a snarky title, but I was hard-pressed to top "timeless appeal" -- another entry in our Adventures in IP thread

Not the worst IP proposal for a universe we've seen recently (remind me to talk about the return of Atlas Comics), but probably the most embarrassing to come out of a major production house (Legendary is the company behind, among other things, Dune).

 From Reactor:

Akiva Goldsman, the writer and producer involved with myriad film and TV projects including the recent Star Trek shows, Doctor Sleep, I, Robot, and Batman & Robin, is setting his eyes toward Irwin Allen’s 1960s sci-fi television shows Voyage to the Bottom of the Sea, Land of the Giants, and The Time Tunnel.

According to Deadline, Goldsman is working with Legendary Television to create “a unified vision for these stories, bringing modern sensibilities to their timeless appeal, and expanding upon his success in revitalizing the Star Trek universe.”

 

For those of you who have not wasted large parts of your lives watching second-rate movies on bad television, Irwin Allen was a producer mainly active in the '60s and '70s. He became famous for producing a string of disaster films, starting with The Poseidon Adventure—probably his best film—and The Towering Inferno. Today, however, he is best known for producing Lost in Space, a show that is largely remembered because a guest star, Jonathan Harris, decided to start rewriting his lines in a successful effort to avoid being written out of the show. (Harris's account has been confirmed by numerous sources, including his co-star Billy Mumy.)

Lost in Space was the second of four science fiction shows Allen produced in the '60s, the other three being Voyage to the Bottom of the Sea (based on a profoundly silly but successful feature film that Allen produced and directed), The Time Tunnel, and Land of the Giants. The shows were not, by any standard, a "universe," but thanks to Allen's heavy recycling of sets, props, and costumes—and his already notorious overreliance on stock footage—they all tended to look the same.

The drive to accumulate intellectual property with any name recognition whatsoever has been going on for decades, particularly on the part of Disney and Warner Bros. This kicked into even higher gear with the emergence of the Marvel Cinematic Universe, for at least a few reasons. First, of course, were the billion-dollar-plus box office takes of these movies. Perhaps more importantly, though, it showed that lesser IP could be turned into major franchises. (Characters like Iron Man and Thor were decidedly second-tier before the MCU made them stars. The only reason Marvel gave them a chance was because the company had previously sold off the rights to their biggest names—the Fantastic Four, the X-Men, and Spider-Man.) Finally, the "universe" approach offered incredible opportunities for synergy. The critical and commercial success of the reimagined Battlestar Galactica added to the excitement.

So far, no attempt to recapture that magic has succeeded. DC has had decidedly mixed results. Universal's Dark Universe basically died in its crib. The attempt to tie all of the Bond films together with Spectre got an overwhelmingly hostile reaction. Even Marvel has stumbled trying to come up with a second act after the Infinity Saga. Nonetheless, studios continue to dig up properties that are more and more obscure—and less and less suitable for revival.

The non-Lost in Space Irwin Allen shows are around 60 years old and were largely forgotten by the end of the decade. There are no plot elements or characters that lend themselves to adaptation, nor is there anything about the shows that suggests they should share a common universe.

 Is it possible to make a hit, even a hit that also happens to be a genuinely good piece of work? Of course it is. Have talented people in front of and behind the cameras saved studios from the consequences of bad executive decisions? Countless times. But make no mistake, that sound you hear in the background when you read a story about someone acquiring an increasingly obscure piece of IP is the scraping of the bottom of the barrel.

Wednesday, March 12, 2025

How can we possibly hope to rebuild after losing more than one half of a percent of our housing?

 Two months later and the Wall Street Journal is still pushing the drama (not to mention a couple of its favorite narratives).

Rebuilding Los Angeles Is California’s Economic Moment of Truth

As I've said before, I don't want to minimize the loss and personal tragedy experienced by the residents of Altadena and Pacific Palisades, but this has just gotten silly.

This is not San Francisco after the earthquake. The combined population of the unincorporated L.A. County community of Altadena and the L.A. City neighborhood of Pacific Palisades was somewhere around 65,000, with about two-thirds of them living in Altadena. That's a lot of people experiencing a great deal of human suffering, but we have to keep some sense of proportion. Los Angeles County has around 10 million people.

Other than four or so days of bad air and some admittedly scary reporting around the possibility of things spreading into much more densely populated areas, the fires had no direct impact on the vast majority of Angelenos. Other than some burn scars on some of the mountains, most of us haven't even seen any signs of the fires. To say that life goes on would be a massive understatement.

It would take some work to check, but it certainly seems like the press forgot about Hurricane Helene faster than they forgot about the L.A. fires, despite that storm killing hundreds and leaving hundreds of thousands displaced. In defense of the editors and reporters covering that other story, though, almost none of those houses belonged to movie stars.

Tuesday, March 11, 2025

Note to Elon: When people are accusing you of being a Nazi, blaming it on a George Soros conspiracy may not be the smartest move.

It's not often that I recommend a Motley Fool article over a Lawyers, Guns & Money post, particularly not one by Scott Lemieux, but these are strange times.

The markets, in general, have been doing horribly of late, but Tesla is in a class of its own. Yesterday, the stock slid more than 15%, bringing it down more than 40% year to date. Most of the coverage, including the LGM piece, has stopped there, and if that were the whole story, the situation wouldn't be as bad as it sounds for Elon Musk, whose fortune overwhelmingly depends on the value of Tesla stock. Even with the ugliness of the past couple of months, the stock is still up almost 25% from where it was a year ago. I'm sure that as you read this, there are Musk fans out there arguing that things are just returning to normal after the spike that followed the news of the Musk-Trump alliance and that things will level off when the company goes back to trading on fundamentals.

There are, however, two numbers that most writers are ignoring—numbers that have very disturbing implications for investors in the company.

108 and 21.

Tesla is a mature company. If it were a person, it could legally buy its own drinks. It also, even after yesterday's drop, has a price-to-earnings ratio over 100. That's about four times as much as you would expect from a mature tech company like Amazon or Facebook. It's about ten times as much as you would expect from a car company. Tesla is still priced like a start-up.

Musk tried to reassure investors by telling them he believed a 1,000% increase in profits was possible over the next five years. Even with the "possible" qualifier, this would be an awfully ambitious claim, even for a company that was doing well. But Tesla's vehicle delivery numbers actually shrank in 2024, and 2025 was already looking to be worse before the head of the company—and retroactive founder—aggressively started destroying the brand. The company has a tiny number of badly received new products in the pipeline. The market is becoming more competitive, particularly with the rise of BYD. 

Now add on the branding disaster. Tesla owners are so embarrassed that they've been seen rebadging their once-trendy sedans and SUVs with the logos of other, less controversial carmakers. "Swastikar" is trending and producing some wonderful social media content—and some very pissed-off responses from fanboys. An ad campaign in Britain features the slogan: "Goes from 0 to 1939 in 3 seconds."

Musk's reaction may not be helping.

Unsurprisingly, Musk has disparaged Tesla Takedown as something astroturfed by wealthy liberals (and conflated the peaceful movement with attacks in which suspected arsonists threw Molotov cocktails at dealerships). On Saturday, he claimed without evidence on X that an “investigation” of some kind had revealed that wealthy philanthropists including George Soros, Reid Hoffman, and Leah Hunt-Hendrix were funding the groups behind Tesla protests through the Democratic fundraising platform ActBlue. Aside from misrepresenting how ActBlue works — it doesn’t fund anything but is rather a system through which donors can send money to Democratic political candidates and liberal organizations — Musk also pointed the finger at two more supposed financiers of Tesla Takedown, Herbert Sandler and Patricia Bauman, who died in 2019 and 2024, respectively.  

[Hoffman's take on Musk is especially insightful.]

But perhaps the most remarkable/saddest part of Elon Musk's 1,000% growth claim is that even if he could do it, it would only bring Tesla's valuation, relative to its earnings, in line with other auto manufacturers. It's a situation where you need to win the lottery just to break even.
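
To see why, here's the back-of-the-envelope arithmetic as a quick Python sketch, using the round numbers cited above (a P/E around 108 for Tesla, roughly 10 for a mature automaker). These are illustrative figures from this post, not live market data.

```python
# Rough arithmetic behind the "win the lottery just to break even" point.
# Figures are the round numbers quoted above, not current market data.
tesla_pe = 108        # Tesla's price-to-earnings ratio cited earlier
automaker_pe = 10     # ballpark multiple for a mature car company

# A "1,000% increase" in profits means earnings grow to 11x their current level.
profit_multiple = 1 + 1000 / 100

# If the stock price stayed exactly where it is, the new P/E would be:
new_pe = tesla_pe / profit_multiple
print(f"Implied P/E after the promised growth: {new_pe:.1f}")
print(f"Typical automaker P/E: {automaker_pe}")
# Roughly 9.8 vs. 10: even the miracle scenario only justifies today's price.
```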


Monday, March 10, 2025

Ten years ago at the blog, March was pretty much Mars One month

It has largely been shoved down the memory hole, but in 2014 and 2015, virtually every major news organization was credulously reporting on a laughably obvious scam project that claimed it was on the verge of setting up a self-sustaining Martian colony funded by a reality TV show. The story was a triumph of lazy, sensationalistic journalism, fertilized by silly techno-optimism and Muskian bullshit about saving the planet by leaving.

Mars One has ended up where every reasonable person knew it would, but it's still worth remembering, if only to remind us to be more skeptical about the next "next big thing."

 

Monday, March 9, 2015
More on Mars One -- I expect this from ABC News but Sheldon?

Wednesday, March 11, 2015
More Martian musings -- reality shows and diet pills

Tuesday, March 17, 2015
The Mars conversation I'd like to be having

Wednesday, March 18, 2015
Another Mars One update

Thursday, March 19, 2015
Elmo Keep's other must-read on Mars One  

Tuesday, March 24, 2015
Rocket Boosters 

Wednesday, March 25, 2015
At this point, the only things Mars One is likely to get into orbit are the wheels coming off of the bus

 Friday, March 27, 2015
Against a stupid meme, the gods themselves contend in vain 

Monday, March 30, 2015
No part of this project is credible 

Tuesday, March 31, 2015
I realize we're entering dead horse territory here -- UPDATED

These posts relied heavily on the reporting of Elmo Keep (for example). It's only fitting she has the last word.

Mars One Ends In Bankruptcy
by Elmo Keep
Feb 14, 2019


Friday, March 7, 2025

XKCD on DST


From XKCD


Munroe also suggests an alternate solution in the title text, averaging out the spring and fall changes and setting clocks 39 minutes ahead year-round. Commenting on this cartoon, the good people at Explain XKCD point out an unexpected benefit of the current system: in at least two separate cases, terrorists have blown themselves up due to failure to spring forward or fall back.




While Munroe's ideas are interesting, I'm still pushing for the plan where we set the clocks back one hour twice a month. Twenty-four slightly longer weekends and everything comes out even by the end of the year.

Thursday, March 6, 2025

Have we mentioned how unfair it is to single out Arthur Conan Doyle?

Yes, I'll admit that falling for those fairy pictures was pretty embarrassing, but Doyle had lots of company in his belief in the paranormal, including some remarkably distinguished late 19th and early 20th century names.

Brigit Katz writing for Mental Floss:

Marie and Pierre Curie readily admitted that nature was rife with mysteries that scientists had yet to identify and study. “[W]e know little about the medium that surrounds us, since our knowledge is limited to phenomena which can affect our senses, directly or indirectly,” they wrote in 1902, acknowledging that they did not fully understand the origin of radioactive energy.

Pierre was particularly fascinated by the paranormal. Introduced to spiritualism by his brother, the scientist Jacques Curie, he confessed in an 1894 letter to Marie that “those spiritual phenomena intensely interest me.” He believed that the paranormal realm was relevant to “questions that deal with physics,” and according to biographer Anna Hurwic, thought that spiritualism might uncover “the source of an unknown energy that would reveal the secret of radioactivity.”

... 

Just a few days before his death in 1906, Pierre wrote again to Gouy describing the last Palladino séances he would ever witness. “[T]hese phenomena really exist and it is no longer possible for me to doubt it,” he proclaimed. “There is here in my opinion, a whole domain of entirely new facts and physical states in space of which we have no conception.”

Marie does not appear to have been as intrigued by Palladino as her husband, according to Susan Quinn, author of Marie Curie: A Life. She had other demands on her time and energy, including her two young children and the intense public attention that followed her Nobel Prize win. But at the very least, Marie doesn't seem to have come away from Palladino’s séances as a firm disbeliever in the possibility of a spirit world—because after Pierre died, she continued to communicate with him.

 

Curie had company. William Crookes and fellow Nobel laureate Charles Richet were two of the many scientists who devoted a large part of their time to paranormal research. Albert Einstein wrote the preface to Mental Radio. Respected publications ran serious discussions about psychic and supernatural phenomena. Not everyone was a believer, but it was rare to find people in the "absolutely, positively no-way" camp either.

Nor were the Curies the only ones to make a connection between the fantastic scientific discoveries of the day and the possibility of the paranormal. When Edison stumbled on the first radio transmitter back in the 1870s (that's another story we need to get into), he wondered if it might explain telepathy. When the later-debunked N-rays were announced, scientists speculated that they too might explain psychic phenomena.

When you are constantly bombarded with news of the seemingly magical—be it cylinders that capture voices, strange rays that let doctors see through flesh, or communications that somehow travel instantly through the air—it would probably seem foolhardy to prematurely dismiss the possibility of something outside of our range of knowledge.

We, on the other hand, have two large bodies of knowledge to draw on when evaluating claims of the supernatural. First, we have well over a century of paranormal research where, when you take out the frauds and the fatally flawed test designs, you find no support whatsoever for these claims. Second, we have a century of research into physics that has effectively ruled out any possibility of these forces causing ghosts to walk or psychics to tell you anything that couldn't be learned from a cold reading.

You and I aren't smarter than Einstein or the Curies, but we do know a bit more.


Wednesday, March 5, 2025

Clive James's example may be god-awful, but what about the point he was trying to make?

 Picking up on our discussion of this claim:

The best Hitchcock film was directed by someone else. Charade would not be as good as it is if Hitchcock had not developed the genre it epitomises, but Hitchcock could never have created a film so meticulous, plausible, sensitive, light-footed and funny.

Everyone now seems to agree that "the best Hitchcock film" is, at best, quite a stretch, but what about the broader claim that the best [blank] film/book/song might be the work of someone other than [blank]?

There are lots of examples where imitations were better than the original and where plagiarists, from Robin Williams to Shakespeare, put out work superior to what they were imitating, but that's not exactly what we're talking about here. 

In this context, the term "Hitchcock film" effectively defines a subgenre (think North by Northwest), with the important caveat that not every film Hitchcock made qualifies as a Hitchcock film by that standard.

Saying someone defines a subgenre is a bit of a left-handed compliment. Obviously, you have to be successful and influential to get there, but that success and influence almost always exist within a larger genre. It also suggests that someone else could do it. While Charade is a silly example, it's not that difficult to imagine someone else theoretically making a better Hitchcock film than Hitchcock. I don't think you could talk about a Kubrick film in the same way. That said, it is worth noting that countless enormously talented filmmakers—in some cases, arguably more talented than Hitchcock himself—have tried their hands at the subgenre and, as far as I can tell, have all fallen short. François Truffaut, Roman Polanski, and Brian De Palma all come to mind.

What about in other media? An Agatha Christie mystery would certainly qualify as one of these personal-brand subgenres, and we could probably find someone to argue that Ngaio Marsh wrote better Christie books than Christie did (I'm not taking a position on this one; I'm just saying someone might), but it's a difficult point to argue. I would be more than willing to make the case that Dorothy L. Sayers wrote better novels, but here we get into one of the big problems with "better [blank] than [blank]" claims: if you improve too much on the original, at some point, it ceases to be a [blank] work. (Tellingly, if probably unintentionally, when Kenneth Branagh wanted to make Hercule Poirot more modern and three-dimensional, he did so by giving him the backstory of Lord Peter Wimsey.) Sleuth also comes to mind. It plays with the conventions of an Agatha Christie story, but mainly to subvert them.

If you're good enough to have a subgenre named after you, usually you are good enough to outshine your imitators, but I finally came up with an exception—one so obvious I don't know why it took me so long to think of it. A writer whose very name is a widely used adjective, arguably one of the most influential writers of the 20th century, and yet someone who was routinely outdone at his own game.

H.P. Lovecraft wasn’t a very good writer. There were good, even sometimes great, elements in his stories, but the stories themselves never rose above mildly inept. I went back and reread some Lovecraft, starting with Dagon, and with the exception of a few passages, it took me back to my days teaching junior high English.

We won’t even get into the racism and anti-Semitism.

Lovecraft's writing often comes across as a crude first draft of what could be a very good piece of fiction in the proper hands, which may be why we saw an extraordinary group of talented writers picking up his ideas and running with them—even as he was still writing.

Although the Mythos was not formalized or acknowledged between them, Lovecraft did correspond, meet in person, and share story elements with other contemporary writers including Clark Ashton Smith, Robert E. Howard, Robert Bloch, Frank Belknap Long, Henry Kuttner, Henry S. Whitehead, and Fritz Leiber—a group referred to as the "Lovecraft Circle".[16][17][18]

Everyone named in that paragraph was a much better writer than H.P. Lovecraft, and it is because of them—and the others who followed—that his works are better remembered today than The Great God Pan or the stories of Lord Dunsany.

 

Tuesday, March 4, 2025

A Blacker Black Box

From Matt Levine's newsletter:

There are two basic ways to use artificial intelligence to predict stock prices:

  1. You build a deep learning model to predict stock prices: You set up a deep neural net, you feed it tons of historical data about stocks, and you train it to figure out how that data predicts stock price returns. Then you run the model on current data, it predicts future returns, and you buy the stocks that it thinks will go up.
  2. You take some deep learning model that someone else built, a large language model, one that is good at predicting text. It is trained on a huge corpus of human language, and it is good at answering questions like “write a poem about a frog in the style of W.B. Yeats.” And you ask it questions like “write a report about whether I should buy Nvidia Corp. stock in the style of Warren Buffett.” And then it trains on the writing style of Warren Buffett, which reflects his thinking style, and its answer to your question — you hope — actually reflects what Buffett might say, or what he might say if he was a computer with a lot of time to think about the question. And because Warren Buffett is good at picking stocks, this synthetic version of him is useful to you. You read the report, and if robot Warren Buffett says “buy” you buy.

The first approach makes obvious intuitive sense and roughly describes what various quantitative investment firms actually get up to: There might be patterns in financial data that predict future returns, and deep learning is a statistical technique for finding them.

The second approach seems … sort of insane and wasteful and indirect? Yet also funny and charming? It is an approach to solving the problem by first solving a much harder and more general problem: Instead of “go through a ton of data to see what signals predict whether a stock goes up,” it’s “construct a robot that convincingly mimics human consciousness, and then train that robot to mimic the consciousness of a particular human who is good at picking stocks, and then give the robot some basic data about a stock, and then ask the robot to predict whether the human would predict that the stock will go up.” 

My impression is that there are people using the first approach with significant success — this is roughly, like Renaissance Technologies — and the second approach is mostly me making a joke. But not entirely. The second approach has some critical advantages:

  1. Somebody else — OpenAI or xAI or DeepSeek or whoever — already built the large language model for you, at great expense. If you are on the cutting edge of machine learning and can afford to pay for huge quantities of data and researchers and computing capacity, go ahead and build a stock-predicting model, but if you are just, say, an academic, using someone else's model is probably easier. The large language model companies release their models pretty widely. The stock model companies do not. You can't, like, pay $20 a month for Renaissance's stock price model.
  2. Because the large language model’s output is prose, its reasoning is explainable in a way that the stock model is not. The stock model is like “I have looked at every possible combination of 100,000 data time series and constructed a signal that is a nonlinear combination of 37,314 of them, and the signal says Nvidia will go up,” and if you ask why, the model will say “well, the 37,314 data sets.” You just have to trust it. Whereas robot Warren Buffett will write you a nice little report, with reasons you should buy Nvidia. The reasons might be entirely hallucinated, but you can go check. I wrote once: “One criticism that you sometimes see of artificial intelligence in finance is that the computer is a black box that picks stocks for reasons its human users can’t understand: The computer’s reasoning process is opaque, and so you can’t be confident that it is picking stocks for good reasons or due to spurious correlations. Making the computer write you an investment memo solves that problem!”
  3. I do think that the aesthetic and social appeal of typing in a little box to have a chat with your friend Robot Warren is different from the black box just giving you a list of stocks to buy. This probably doesn’t matter too much to rigorous quantitative hedge funds, but it must matter to someone. We talked last year about a startup that was launching “a chatbot that offers stock-picking advice” to retail brokerage customers, and it seemed like the goal of the project was not “the chatbot will always tell you stocks that will go up” but rather “the chatbot will offer a convincing simulacrum of talking to a human broker,” who also will not always tell you stocks that will go up. You call the broker anyway. Now you can text the chatbot instead.

And so we also talked last year about an exchange-traded-fund firm that would use large language models to simulate human experts — ones with characteristics of particular humans, like Buffett — to make stock picks. Why use LLMs rather than build a model to directly predict stock prices? Well, because the LLM is already there, and the data is already there, and the schtick is a little more human than “here’s our black box.”

Anyway here’s a paper on “Simulating the Survey of Professional Forecasters,” by Anne Lundgaard Hansen, John Horton, Sophia Kazinnik, Daniela Puzzello and Ali Zarifhonarvar:

Though Levine does a characteristically great job laying out the questions in a clear and insightful way, on at least one point, I think he's not just wrong, but the opposite of right. The LLM may appear to be less opaque, but it is actually the blacker black box.

Normally, when we use the term "black box model," we mean that we know the data that goes in and can see the scored data that comes out, but the process by which the output is arrived at is so complex and computation-intensive that we can't say exactly what happened. However, in practice, that's not entirely true. We can analyze the output, identify the main drivers of the model, and flag potential biases and other problems. We can perturb the input data, leaving out certain parts, and observe how the output is affected. In most real-world cases I've seen, you can reverse-engineer the model, creating something remarkably close that uses a manageable and, more importantly, comprehensible dataset and series of calculations. This simpler, reverse-engineered model won't use the same data as the black box, but it will be transparent, will very likely use the same categories of data, will generally capture the underlying relationships, and will sometimes perform almost as well.
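
To make that concrete, here's a minimal sketch of the kind of probing described above, using scikit-learn on synthetic data. The black box, the features, and the numbers are all invented for illustration; the point is the workflow, not the specifics.

```python
# A minimal sketch: train an opaque model, find its main drivers by perturbing
# inputs, then fit a small transparent surrogate to its predictions.
# Everything here (data, features, models) is synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=50, n_informative=8,
                           random_state=0)

# The "black box": accurate, but not something you can read off by hand.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Perturb each input in turn and see how much performance degrades.
importance = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
top_drivers = np.argsort(importance.importances_mean)[::-1][:8]
print("Main drivers (feature indices):", top_drivers)

# Reverse-engineer: fit a small, readable tree to the black box's own
# predictions, using only the handful of features that actually matter.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X[:, top_drivers], black_box.predict(X))

agreement = (surrogate.predict(X[:, top_drivers]) == black_box.predict(X)).mean()
print(f"Surrogate agrees with the black box on {agreement:.0%} of cases")
```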

I have never done anything related to stock prediction, but I have worked with models predicting consumer behavior, and I'm betting that the underlying process is somewhat similar. Let's take the example of a credit card company building a black-box model to predict which customers are likely to default on their debts. In addition to transaction and payment history, the company has access to a huge amount of data from credit bureaus, vendors such as Acxiom, publicly available information, and macroeconomic data. We're talking about tens of thousands of variables going into that model. It is not possible for a person or even a team to go through all of these fields one by one, but at a more general level, it is possible to know what kind of data is going in and to maintain some standard for quality and relevance.

If your training data is everything that can be scraped from the internet, it is effectively unknowable. In the traditional black-box scenario, we know the data and the output; only the middle part of the process is opaque. With large language models, however, everything before the final answer is shrouded in darkness.

Your training data may include the writings of Warren Buffett, the text of A Random Walk Down Wall Street, and the archives of The Wall Street Journal, but it can also contain horoscopes, blogs from "buy the dip" Robinhood day traders, and market analysis from crypto investors. The style and word choice might resemble those of the Oracle of Omaha, but the underlying ideas might come from the Rich Dad Poor Dad guy.

 

Monday, March 3, 2025

The Grandiosity/Contribution Ratio -- another newly relevant repost

One of the recurring threads at the blog for years now has been the Lords of Ithuvania: the way we have collectively treated people who stumbled upon huge fortunes in the tech industry as super-capable, often almost Messianic figures who can solve any problem and who go unchallenged when making even the most delusional boasts, like claiming they can cure all diseases. That myth is now proving extraordinarily costly.

Tuesday, January 23, 2018

The Grandiosity/Contribution Ratio

From Gizmodo [emphasis added]
Zuck and Priscilla laid out the schematics for this effort on Facebook Live. The plan will be part of the Chan Zuckerberg Initiative and will be called simply “Chan Zuckerberg Science.” The goal, Zuck said, is to “cure, prevent, or manage all diseases in our children’s lifetime.” The project will bring together a bunch of scientists, engineers, doctors, and other experts in an attempt to rid the world of disease.

“We want to dramatically improve every life in [our daughter] Max’s generation and make sure we don’t miss a single soul,” Chan said.

Zuck explained that the Chan Zuckerberg Initiative will work in three ways: bring scientists and engineers together; build tools to “empower” people around the world; and promote a “movement” to fund science globally. The shiny new venture will receive $3 billion in funds over the next decade.
...

“Can we cure, prevent, or manage all diseases in our children’s lifetime?” Zuck asked at one point. “This is a big goal,” he said soon after, perhaps answering his own question.

Obviously, any time we can get some billionaire to commit hundreds of millions of dollars a year to important basic research, that's a good thing. This money will undoubtedly do a tremendous amount of good and it's difficult to see a major downside.

In terms of the rhetoric, however, it's useful to step back and put this into perspective. In absolute terms, $3 billion, even spaced out over a decade, is a great deal of money, but in relative terms, is it enough to move us significantly closer to Zuckerberg's "big goal"? Consider that the annual budget of the NIH alone is around $35 billion. This means that Zuckerberg's initiative is promising to match a little less than 1% of NIH funding over the next 10 years.
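
For what it's worth, the back-of-the-envelope arithmetic behind that figure, using the rounded numbers above ($3 billion over a decade against a roughly $35 billion annual NIH budget), is just:

\[
\frac{\$3\ \text{billion} / 10\ \text{years}}{\$35\ \text{billion per year}} = \frac{\$300\ \text{million}}{\$35{,}000\ \text{million}} \approx 0.86\%
\]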

From a research perspective, this is still a wonderful thing, but from a sociological perspective, it's yet another example of the hype-driven culture of Silicon Valley and what I've been calling the magical heuristics associated with it. Two of the heuristics we've mentioned before were the magic of language and the magic of will. When a billionaire, particularly a tech billionaire, says something obviously, even absurdly, exaggerated, the statement is often given more rather than less weight. The unbelievable claims are treated less as descriptions of the world as it is and more as incantations to help the billionaires will a new world into existence.

Perhaps the most interesting part of Zuckerberg's language here is that it reminds us just how much the Titans of the Valley have bought into their own bullshit.

 

Friday, February 28, 2025

More thoughts on criticism -- critics and reviewers

Both reviewer and critic are honorable professions, with, if anything, the former being more valuable. The best reviewers, by a wide margin, used to be the team that put together Maltin's Movie Guide. As far as I can tell, no one has stepped up since to fill the gap, another case of the internet crowding out quality with crap.

The purpose of criticism is to deepen our understanding of a work and explore its connection to larger themes, genres, social issues, politics, psychology, etc. The primary purpose of reviews is to let people know whether or not they might like a movie, book, restaurant, or whatever.

It follows that the target audience of reviews is people who have not seen or heard the work in question. This doesn't mean that people won't sometimes seek out reviews after watching, reading, or listening to it. Many of us like to compare our reactions to those of people who get paid to do this, but the reviews themselves are virtually never written for this segment.

It also follows that criticism is almost always most meaningful when the reader knows at least something about the subject. Here too, there can be some gray areas, particularly when the work in question is widely known or when it connects to larger questions about other topics. The book and the movie The Grapes of Wrath would qualify under both of these criteria.

Pauline Kael was probably our best movie critic. Yes, I know I'll get some pushback on that one, but historically that pushback has mainly come from people who strongly disagree with her assessments of various movies, which is understandable, since Kael was also a terrible reviewer. If you try to boil down her thoughts about a film to "this part was good, this part was bad," she seems arbitrary and erratic. The rule of thumb for reading Kael is that long is almost always good and short is generally bad. For this reason, literally the last book of Kael's you should read is 5,001 Nights at the Movies, which unfortunately seems to be her most popular title.

Though heavily influenced by Pauline Kael, Roger Ebert was mainly a reviewer. Almost all newspaper movie critics are. It is difficult to be anything else, given the space constraints that almost all of them work under. Bob Chipman is one of those rare examples of a good critic who is also a good reviewer. Denby is a better critic than reviewer. The new guy at the New Yorker is a competent reviewer and a lousy critic, but we'll get to him in a future post.

It is important for a reviewer to have good, or perhaps even more to the point, predictable tastes. This is particularly valuable when the reviews break with what you'd expect. A show like The Crown getting good notices tells you virtually nothing; it was all but grown in a lab to push reviewers' buttons. On the other end of the spectrum, when Siskel and Ebert both gave thumbs up to the over-the-top sex and gore of Re-Animator, or when pretty much every review singled out William Fichtner's performance in the Grindhouse homage Drive Angry, you know you've got something good.

(Fichtner is one of the most reliable and underrated actors in Hollywood, so this was always pretty much a safe bet, but seriously, he is wonderful as the coolest satanic emissary you'll ever regret running across.)

With criticism, predictable or compatible tastes are often completely unnecessary. The director Barry Sonnenfeld likes Jerry Lewis comedies. I find them difficult to choke down. I do, however, enjoy listening to Sonnenfeld explain why he admires Lewis, and hearing him discuss those films that I don't care for deepened my appreciation of a number of comedies I am very fond of, including Men in Black.