[If you want to get picky, that's Talky Tina.*]
The Talking Tina fallacy is the tendency to ascribe sentience, emotions—and often supernatural properties—to any non-human animal or object that uses language, be it a trained parrot, a mechanical device, or a piece of electronics.
LLMs were all but designed to play on the Talking Tina fallacy. Even when the responses are total gibberish, they still have the superficial indicators of language proficiency. The word choice seems appropriate. The phrasing is natural. The rhythms and alliteration have a familiar feel.
These moments, when the linguistic form is captured perfectly but without any underlying sense, should remind us of what the algorithm is and is not doing: each word or phrase is generated not from meaning but simply from patterns in the training data.
Even here, though, our human compulsion to project makes us see things that aren't there. The very term “hallucination” implies that something different is going on, that these nonsensical answers represent some kind of malfunction in the model. But the process that (wrongly) tells us the square root of 4 is an irrational number is exactly the same process that (correctly) tells us the square root of 2 is.
Human communication starts with meaning: we have ideas, information, and feelings, and then we search for words to convey them. LLMs start at the other end, generating words that match the phrasing and patterns found in their training data, and quite often, by getting the form right, they also produce what we as humans read as meaning. That meaning is entirely something we project onto the algorithm's responses. But the projection doesn't make the output valueless, so long as we understand how nearly opposite the computer's approach is to our own.
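The point can be made concrete with a deliberately crude sketch: a toy next-word model that knows nothing but which word followed which in its training text. (This is an illustration of pattern-only generation, not how a real LLM is built; actual models learn probability distributions over tokens with neural networks, but the principle of predicting the next token from preceding context is the same.)

```python
import random

def train(corpus):
    """Record, for each word, every word that ever followed it.
    Pure surface statistics; nothing about meaning is stored."""
    model = {}
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, n_words, rng):
    """Emit words by repeatedly sampling a recorded successor.
    The output mimics the corpus's phrasing whether or not the
    result is true or even sensible."""
    out = [start]
    for _ in range(n_words - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A tiny made-up corpus for illustration.
corpus = ("the square root of two is irrational "
          "the square root of four is two "
          "two is a rational number")
model = train(corpus)
print(generate(model, "the", 8, random.Random(0)))
```

Run this a few times with different seeds and it will happily emit sentences like "the square root of four is irrational": fluent in form, false in content, produced by exactly the same process as its correct outputs.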
Right now, LLMs are novelties of highly limited use in the real world. If we are to move beyond that and actually take advantage of what might be the biggest advance in natural language processing ever, we need to have a clear mental model of what they're doing, and how we can or can't make use of them.
If we have a clear and detailed understanding of what's going on, we can find all sorts of ways to use these wonderful tools. More importantly, we can know when not to use them.