
In the hours after a fatal shooting involving Immigration and Customs Enforcement in Minneapolis, a familiar and dangerous pattern reappeared online: AI-generated images claiming to “reveal” information that no camera ever recorded. Social media users circulated synthetic portraits said to show the unmasked face of the shooter, presenting them as visual evidence rather than what they actually were — fabricated guesses produced by generative systems.
The images did not enhance hidden details or recover lost footage. They invented content. Prompted to “remove” a face mask from videos where the subject’s face was never visible, AI tools generated plausible-looking but entirely fictional faces. Those images were then treated as authentic frames, wrongly linked to real people, and spread at scale — illustrating how easily generative AI can transform speculation into something that looks like proof, especially during fast-moving, emotionally charged news events.
In this case, the online firestorm followed the Jan. 7 shooting of 37-year-old Renee Good by an ICE officer in Minneapolis. Witnesses’ videos and news footage show ICE agents wearing masks throughout the encounter, yet users still prompted generative systems like xAI’s Grok to “remove this person’s face mask.” The result was a photorealistic image with no basis in recorded visual data, which was then paired with a specific name and wrongly associated with at least two unrelated men — one a Missouri gun-shop owner and the other the publisher of the Minneapolis Star Tribune — who suddenly faced a wave of harassment for something they had nothing to do with.
How Generative AI Models Hallucinate Faces
As an expert in photo and video forensics who has testified in state, federal and international courts, I can tell you that the core problem here is straightforward: If the camera never saw the face, there is no face to recover. AI can sharpen, interpolate, and in some cases plausibly fill in tiny gaps between known pixels, but it cannot conjure biometric detail that was never captured in the first place. Any system that claims to reveal a full face from a fully or mostly masked subject is necessarily inventing content, not uncovering it, no matter how convincing the result may look to a lay viewer.
Technically, this is exactly how modern image generators behave. When asked to “unmask” someone, they are not peeling away layers to reveal an underlying recording; they are sampling from patterns in their training data to hallucinate a statistically likely face that “fits” the visible hairline, skin tone, pose and lighting. As researcher Hany Farid put it in one analysis, AI “enhancement” has a tendency to hallucinate facial details, creating an image that may look visually clear yet be “devoid of reality” for identification. For forensic purposes, once generative content is introduced, the image no longer represents what a camera captured—it represents what a model imagined.
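To make that concrete, here is a minimal sketch of what an "unmasking" request actually does under the hood, using the open-source diffusers library and a public inpainting checkpoint as stand-ins for whatever system a given platform runs. The model name, file names and prompt are illustrative assumptions, not the specific tool or inputs behind the viral posts. The telling detail is in the loop: change nothing but the random seed and the model produces a different, equally confident face, because it is sampling an invention rather than recovering a recording.

```python
# Illustrative only: filling a masked region with a generative inpainting model.
# The checkpoint, file names and prompt below are assumptions for this sketch,
# not the tool or inputs used in the viral Minneapolis images.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("video_frame.png").resize((512, 512))  # frame from the recording
mask = Image.open("mask_region.png").resize((512, 512))   # white where the face covering is

# Two runs, two seeds, two different "unmasked" faces: the model is sampling
# statistically likely pixels from its training data, not revealing anything
# the camera captured.
for seed in (0, 1):
    generator = torch.Generator("cuda").manual_seed(seed)
    result = pipe(
        prompt="photorealistic face of a man",
        image=frame,
        mask_image=mask,
        generator=generator,
    ).images[0]
    result.save(f"invented_face_seed_{seed}.png")
```

Run twice, the sketch yields two incompatible "identities" for the same frame, which is all the proof needed that neither one is evidence.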
The risk is that these synthetic portraits borrow the visual authority of real photography. When an AI system outputs a smooth, high‑resolution “frame,” most viewers implicitly treat it as an enhanced version of the original video rather than a piece of fan fiction. In a volatile setting — an ICE shooting, a protest, a police raid — that confusion can rapidly pollute public understanding, misdirect anger at the wrong individuals, and contaminate potential jury pools long before any official identification process is complete.
The Real-World Harm Of AI Guesswork
From a professional photo and video forensics standpoint, any workflow that mixes generative AI with evidentiary imagery must draw a hard line between two categories. On one side are legitimate clarifications of existing pixels: controlled contrast adjustments, de‑noising and resolution‑preserving interpolation that can be explained, reproduced and audited. On the other side are generative edits — face swaps, mask removals, clothing changes or “completions” of occluded areas — that create new, unobserved content. The first category can sometimes be appropriate in investigations when carefully documented; the second has no place in establishing identity or reconstructing events, because it breaks the chain between what the sensor captured and what the viewer sees.
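As a rough illustration of the first category, the steps below (sketched with the OpenCV library; the parameter values are arbitrary examples, not a prescribed forensic protocol) are all deterministic functions of the recorded pixels. Given the same input frame and the same documented parameters, they produce the same output every time, which is what makes them explainable and auditable in a way no generative "unmasking" can be.

```python
# Illustrative sketch of pixel-based clarification: every output pixel is a
# documented, reproducible function of input pixels, and no new content is invented.
# File name and parameter values are examples only.
import cv2

frame = cv2.imread("video_frame.png")

# Contrast-limited adaptive histogram equalization, applied to the luminance channel.
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab = cv2.merge((clahe.apply(l), a, b))
clarified = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# De-noising and interpolation: the same inputs and parameters always yield
# the same image, so the process can be disclosed, repeated and audited.
clarified = cv2.fastNlMeansDenoisingColored(clarified, None, 5, 5, 7, 21)
upscaled = cv2.resize(clarified, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("clarified_frame.png", upscaled)
```

At most, a pipeline like this makes detail the camera actually recorded easier to see; it cannot add a face that was never there.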
The Minneapolis case shows how easily that boundary can be crossed when consumer AI tools are pointed at breaking news footage. Local and national outlets have already warned readers that viral “unmasked” images of the ICE agent are AI-generated and should not be treated as evidence of who pulled the trigger. For courts, investigators and the public, the rule of thumb should be blunt: If an AI image claims to show a face, object or action that no camera recorded, it is not an enhancement—it is an invention, and it should not be treated as evidence.