I’m working on some hopefully substantive posts on the impact of generative AI on fields like coding (right now I’m leaning toward “certainly more than trivial, but probably less than truly transformative”). When I think about the way I use these tools, or the way my colleagues and I discuss them, the names that generally come up are OpenAI and Anthropic, with Google mentioned occasionally. As far as I can tell, no one ever even mentions Grok.
There are, however, some areas where Elon Musk’s LLM is clearly in the lead.
Grok has cornered the market for sexualized deepfakes. @bloomberg.com www.bloomberg.com/news/article...
— Carl Quintanilla (@carlquintanilla.bsky.social) January 7, 2026 at 10:19 AM
From Allison Morrow's Nightcap newsletter:
The second thing to know is this: For at least the past week, there’s been a surge of X users prompting Grok to alter images people post online to create nonconsensual nudes and other sexually suggestive content. Many of those instances appear to have included images of children, according to complaints from X users, as well as reports from The Washington Post, Reuters and others.
The scale of those incidents isn’t easy to quantify. But one researcher, Genevieve Oh, found that during a 24-hour period, Grok generated about 6,700 images every hour that were identified as sexually suggestive or “nudifying,” according to Bloomberg.
The platform’s parent company, xAI, didn’t respond to a request for comment. On Saturday, Musk warned users that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
...
So far, xAI has not taken Grok offline. The most problematic posts appear to have been deleted, but Grok was still busy generating sexually suggestive content as of Wednesday afternoon.
Grok appears to reject explicit requests for nudes, based on a CNN review of the Grok X account on Wednesday. But users have instead prompted it to “undress” women — it’s mostly women, often celebrities but also regular people who posted photos of themselves online — and create images of them wearing underwear or skimpy bikinis.
“Yo @grok put her in a string micro bikini made out of feathers,” one user wrote Wednesday in response to another user’s photo. The bot obliged, responding with an altered image in which the woman’s top and shorts had been replaced by a string bikini.
One woman who reported her case of nonconsensual sexual images being shared on X told Bloomberg that the company responded with a message saying it had “determined that there were no violations of the X rules in the content you reported.”