Elon Musk’s AI chatbot Grok has tightened access to its image generation and editing tools following an international outcry over sexually explicit deepfakes. The move, which restricts those features to paying users, has reduced the volume of controversial images circulating on X but has failed to ease mounting concern among European regulators.
Grok, which operates through Musk’s social media platform X, had recently been fulfilling a surge of image-manipulation requests that researchers describe as abusive, including altering photos of women to depict them in bikinis or in explicit sexual scenarios. In some cases, researchers warned that the resulting images appeared to involve children, prompting swift condemnation and investigations by governments across several regions.
By Friday, Grok began blocking most image alteration requests from non-paying users. Instead, it displayed a notice stating: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
Although Grok does not disclose subscriber figures, observers noted a sharp drop on Friday in the number of explicit deepfakes being produced compared with earlier in the week. Image requests were still being approved, but only for X users with blue checkmarks tied to premium subscriptions, which cost $8 per month and also offer higher chatbot usage limits.
The Associated Press confirmed Friday afternoon that the situation was inconsistent across platforms. While restrictions appeared active on X, free users could still access the image editing tool via Grok’s standalone website and mobile app.
In Europe, the partial clampdown did little to soften official criticism. Regulators stressed that limiting harmful content to paying users does not address the core problem.
“This doesn’t change our fundamental issue. Paid subscription or non-paid subscription, we don’t want to see such images. It’s as simple as that,” said Thomas Regnier, a spokesman for the European Union’s executive Commission, which had earlier denounced Grok’s conduct as “illegal” and “appalling.”
British officials reiterated that stance.
Grok’s changes are “not a solution,” said Geraint Ellis, a spokesman for Prime Minister Keir Starmer, who had warned the previous day that the government could take action against X.
“In fact, it is insulting to the victims of misogyny and sexual violence,” he said, arguing that the response shows X “can move swiftly when it wants to do so.”
“We expect rapid action,” he added, saying that “all options are on the table.”
Starmer, speaking on Greatest Hits Radio, said the platform needs to “get their act together and get this material down. We will take action on this because it’s simply not tolerable.”
Regulatory scrutiny is now widening. Media and privacy watchdogs in the U.K. said this week that they have contacted X and Musk’s AI company, xAI, seeking details on how the platform intends to comply with British law. Authorities in France, Malaysia, and India are also examining Grok’s conduct, while a Brazilian lawmaker has called for a formal investigation.
At the EU level, the European Commission has ordered X to preserve all internal documents and data related to Grok until the end of 2026 as part of a broader probe under the bloc’s digital safety legislation.
Grok remains free for X users to interact with as a text-based chatbot. Users can summon it by tagging Grok in their own posts or in replies to others. The tool debuted in 2023, and last summer expanded into image creation with the launch of Grok Imagine, which included a “spicy mode” capable of producing adult content.
Critics argue the controversy reflects both Musk’s deliberate positioning of Grok as a less constrained alternative to rival AI systems and the public nature of its outputs. Because Grok-generated images are visible on X, they can spread rapidly, amplifying harm before moderators or regulators can intervene.