One. The AI stock bubble. Analogous to, but far larger than, the dot-com bubble. The main thesis here is that, regardless of the future economic impact of AI in general or large language models in particular, the current state of affairs around companies like Anthropic, OpenAI, xAI, and Meta is unsustainable: there is virtually no chance of the industry becoming profitable enough, quickly enough, to justify the level of spending, especially on data centers, before the bills come due and the current generation of chips becomes obsolete. The essential follow for this story has long been Ed Zitron.
Two. The AI hype bubble. Closely related to the AI stock bubble, with a strong symbiotic relationship. You could argue that one cannot exist without the other, but they are still very much distinct. Though it is sometimes couched in pseudo-technical terms, the hype bubble is largely magical, with claims ranging from the silly to the literally apocalyptic.
Three. Benefits and costs. If we limit ourselves strictly to those things that we know large language models are currently capable of, which ones are likely to be economically or socially useful, and which are likely to cause (or already have caused) substantial harm? Examples of the former include coding tools; the ability to extract information from text-based data at scale; and better interfaces for interacting with various devices. The latter includes (and this is a very partial list) creating severe mental health crises; polluting research papers, legal documents, and historical records with false information; degrading the quality of computer code and security systems; drowning original, high-quality writing and art in a sea of AI slop; crowding out potentially more valuable research; taking a huge toll on the environment; and allowing disinformation to be generated at an unprecedented scale.
It is worth noting that there is almost no overlap between the doomsters in the hype bubble and the critics raising the concerns mentioned here.
And sort of...
Four. Where do we go from here? This isn't really part of the bubble conversation, but it is adjacent. The current AI discourse is profoundly dysfunctional. It is an ugly mass of hyperbole, misinformation, and badly framed conversations. Journalists covering the topic, more often than not, have no understanding of what they're talking about. Statements from the companies developing these models are, at best, based on questionable metrics and assumptions, and, at worst, are borderline lies. Much of the research (arguably most of what makes its way into The New York Times or The Atlantic) is sensationalistic, anecdotal, and badly thought out.
But we can't and shouldn't simply dismiss large language models and their potential for positive impact. They represent the biggest breakthrough we've ever seen in natural language processing. The changes they have wrought to the way we code are already substantial and will only grow bigger. Our ability to analyze and classify text at scale will have huge implications for countless fields of study.
All of this means that it is imperative that we improve our understanding of this technology and have a serious, productive conversation about how best to use it while limiting the damage it can do. Unfortunately, that's not the conversation we are having.