Wednesday, March 18, 2026

Blah blah Ginger blah blah blah

The following quote comes from the distinguished computer scientist Yann LeCun, but I've heard others in the field say virtually the same thing. The trouble is it's wrong. Clearly, demonstrably wrong.

 We want [language] to be complicated because we think of it as uniquely human; it’s what makes us humans superior to other animals.


Not only is this an unsupported assertion presented as a self-evident fact, it is obviously contradicted by examples so widely known that all of us have seen them.

When presented with any signs of language use in animals or machines, the natural human tendency is to overestimate the underlying linguistic and reasoning processes. We've already talked about the Talking Tina effect, but a far more familiar example is that of dogs. These animals can learn to recognize specific words much in the way they can learn to recognize the sound of a can opener or of a leash being taken down from a hook.

Now ask yourself: which is far more likely to happen—will a dog owner overestimate or underestimate their pet's level of comprehension? If these people really wanted to think of language as "uniquely human," they wouldn't be talking to their pets in full sentences and frequently insisting that the animals understand, or at least recognize, more than a handful of words.

We'll call this the Ginger effect, referring to that great Far Side cartoon.

The very act of anthropomorphizing undercuts our sense of superiority, and yet we do it all the damned time.

LeCun wrote this comment in 2012. It wasn't convincing then, and the events of the years since have rendered it laughable.

Not only have recent breakthroughs in natural language processing confirmed the Ginger effect, they have taken it to startlingly high levels, often with disturbing implications. What we've learned recently is that people are not just ready, even eager, to accept the idea that a machine can use language; they also have a tendency to project onto these machines all sorts of human qualities such as intelligence, insight, empathy, and motivations.

Depending on your tolerance for anecdotal data, we have lots of well-documented cases of people forming relationships with chatbots that are so intense as to lead to severe depression, isolation, psychotic breaks, criminal acts, and even suicide. Admittedly, in absolute terms those numbers are still fairly small, but that's not the case with people using the technology as a substitute for personal and even romantic relationships. Those numbers are alarmingly high.

It will take years of psychological and sociological research to definitively say what's going on here, but there seems to be little doubt that many of these people—possibly most—believe on some level that they are in an emotionally reciprocal relationship with a computer.

Scientists, journalists, and pundits have now spent a couple of years on largely unproductive speculation about whether LLMs display intelligence or emotions, when we should instead be asking the far more immediate questions: what are the best applications, and what are the most worrying unintended consequences, of a massive step forward in computers' ability to use and process language?


1 comment:

  1. "It will take years of psychological and sociological research to definitively say what’s going on here, but there seems to be little doubt that many of these people—possibly most—believe on some level that they are in a relationship with some degree of emotional reciprocation with a computer."

    I don't think there is currently any progress at all in this direction. Psychologists are really excited about getting chatbots on the couch! I see a burgeoning willful blindness about the lack of humanity in LLMs. There seems to be a collective glazing over in the face of skeptics who mention things like guessing the next token. Instead, we get long-winded essays on the agency and sapience of chatbots.
