Elon Musk’s Grok image generator has moved in a matter of weeks from viral novelty to a test case for something regulators are usually reluctant to do: suspend an AI system outright. The reason is not merely offensive or humiliating deepfakes of adults, but the emergence of sexualized images of children—or bodies that appear to be children—generated by a commercial system and distributed through a mainstream platform. Under U.S. and European law, that category of imagery can be illegal whether it comes from a camera, Photoshop or a neural network.
The Law Already Covers Synthetic CSAM
Child‑protection law has been converging on this moment for nearly two decades. In the United States, 18 U.S.C. § 1466A makes it a felony to knowingly produce, distribute, receive or possess “a visual depiction of any kind” that is obscene and depicts a minor, or what appears to be a minor, engaged in sexually explicit conduct, including computer‑generated images. Federal guidance and FBI public notices now state explicitly that child sexual abuse material created using generative AI or other manipulation technologies is illegal child‑exploitation material, not a loophole. In parallel, states including Texas have passed or proposed laws that specifically criminalize AI‑generated or digitally altered CSAM, closing much of the remaining ambiguity.
The “If a Human Did This” Test
If an individual downloaded a local image model and intentionally generated obscene images of what appear to be minors in sexual acts, that person could face charges under federal obscenity and child‑exploitation statutes and, in many states, under AI‑specific CSAM laws. The fact that the images were produced by a neural network rather than a camera would not shield them; prosecutors focus on the depiction and the intent, not the tool. That reality makes the “if a human did this” test more than rhetorical. There is no principled reason a commercial AI system that enables the same category of illegal output—at scale—should be allowed to continue operating while its owners experiment with partial fixes.
Regulators Are Starting to Draw Lines
Regulators have begun to signal that they see it the same way. In the U.K., Ofcom said it had urgently contacted X over reports that Grok could generate sexualized images of women and children, reminding the company of its obligations under the Online Safety Act. European officials, invoking the Digital Services Act, have described child‑like deepfakes tied to Grok as “appalling,” while Paris prosecutors have broadened an investigation into X to encompass the alleged use of Grok to create or disseminate child sexual imagery. In the United States, three senators have urged Apple and Google to remove X and Grok from their app stores, arguing that keeping the apps available undermines the stores’ own safety policies on sexual exploitation.
Why Incremental Fixes Aren’t Enough
X and xAI’s response has been incremental rather than categorical. After the initial backlash, X restricted Grok image generation on the main platform to paying users and tightened filters on some explicit prompts. But testing and reporting indicate that the standalone Grok app and certain interfaces still allow sexualized edits, such as more revealing outfits and suggestive settings, without meaningful friction, and watchdogs have warned that the underlying model weights have not fundamentally changed. In practice, that means a system implicated in generating child‑like sexual imagery has remained broadly available while guardrails are retrofitted onto a product already in the wild.
For child‑protection investigators and digital‑forensics experts, this posture is deeply concerning. Every day such a system continues to allow sexualized edits involving minors or child‑like bodies is another day new synthetic CSAM can enter circulation—material that can be used for grooming, traded alongside real abuse imagery, and deployed to complicate investigations involving actual victims. Analysts already face the challenge of distinguishing authentic abuse images from fabrications; when high‑profile platforms normalize photorealistic but invented abuse, the evidentiary field tilts toward doubt, making prosecutions harder and victims less protected.
Why Suspension Is the Responsible Response
This is why “shut it down until it is safe” is not an overreaction but a logical extension of existing law. When products pose an unreasonable risk of serious harm, as with unsafe vehicles, contaminated food, or defective medical devices, regulators pull them from the market first and sort out fixes second. AI systems capable of generating sexual images of children or child‑like bodies should not be treated differently simply because the technology is new. The rule here is narrow and practical: if an AI system can still be prompted to generate images that depict, or appear to depict, minors in sexual situations, it is operating in a legal red zone, not a gray one, and the responsible course for platforms and regulators is to suspend that capability entirely until it can no longer be used to manufacture synthetic child sexual abuse material.