Lars Daniel's Portfolio

Lars Daniel

January 04, 2026

Article at Forbes

Cop Transforms Into Frog, According To AI-Generated Police Report

Cop transforms into frog, according to AI-generated police report. (Photo: Getty)

Updated, Jan. 5 at 12:36 p.m. ET: Axon has been contacted for comment.

A December test of AI report-writing tools in Heber City, Utah, produced a police report claiming an officer had shape-shifted into a frog after the software misinterpreted background audio from the Disney film The Princess and the Frog, forcing the department to explain how such a basic reality check failed.

As more U.S. agencies quietly adopt generative AI systems like Axon’s Draft One to turn body-camera audio into official reports, the central question is whether any promised efficiency gains outweigh the new risks to accuracy, accountability and due process.

Heber City began piloting Draft One and a rival product called Code Four in December, pitching them as ways to cut paperwork so officers could spend more time on the street. In practice, even a staged traffic-stop demo produced a “fairly normal” report that still needed multiple corrections, while the earlier frog report showed how easily background chatter can be misread as fact when AI is treated as a neutral stenographer rather than the unreliable narrator it can be.

Sgt. Rick Keel told local TV the tools now save him six to eight hours a week, which captures the tradeoff: meaningful time savings anchored to a system that can, in the same breath, turn an officer into an amphibian.

Vendors frame this tradeoff as a simple efficiency upgrade. Axon, which launched Draft One in early 2024, markets the system as a way to automatically generate “high-quality draft report narratives in seconds” from body-camera audio, cutting report-writing time roughly in half and freeing officers for “real police work.” The company’s own blog stresses safeguards, from required human editing to an audit trail that records who generated what, when and from which evidence, and says default settings limit early use to minor incidents. On paper, that sounds like the perfect compromise: let machines handle the boring paperwork while humans keep control of the facts that matter.

But independent investigators say the real-world deployment often looks very different from the marketing. An Electronic Frontier Foundation investigation published in July 2025 concluded that Draft One “seems deliberately designed to avoid audits,” finding that it is often impossible, even for departments themselves, to tell which parts of a report were written by AI and which by an officer. Public-records requests showed the tool being used across a wide range of cases, including kidnapping and assault, with more than 900 reports generated for one Utah department alone between September 2024 and April 2025, despite a feature that would allow agencies to restrict its use for serious crimes. In a separate analysis, Mother Jones reported that agencies were disabling oversight tools and allowing Draft One to touch serious felony cases, expanding the technology’s footprint without matching transparency.

The frog incident crystallizes another problem: hallucinations and subtle distortions can slip into the official record long before anyone spots them. In Heber City, the mistake was absurd enough to be caught — a shape-shifting officer is hard to miss, especially when local TV is riding along. The risk is less theatrical when the AI mishears a street name, condenses a chaotic scene into a cleaner narrative or smooths over uncertainty in ways that later benefit the prosecution in court.

As American University law professor Andrew Ferguson warned in 2024, the ease of automation can nudge officers to become less careful with their writing, treating the AI-generated narrative as a default version of the truth rather than a draft that needs rigorous checking.

The Electronic Frontier Foundation argues that because it is so difficult to see where AI begins and an officer’s edits end, Draft One can erode accountability: if a key misstatement appears in a report, the officer can blame the tool, while the vendor points back to the officer’s sign-off. In California, these concerns have helped drive support for legislation like S.B. 524, which would rein in AI-written reports and require clearer disclosures when generative tools are used.

For departments under pressure to do more with fewer officers, the math is tempting: Axon’s own research suggests officers spend up to 15 hours a week on reports, and Heber City’s sergeant says AI has clawed back a full workday of that time. But that calculus ignores who carries the risk when hallucinations, transcription errors or biased phrasing make it into the official record: not just the department, but defendants, victims and communities whose cases hinge on what a report says happened in the first chaotic minutes of an encounter. Until agencies can demonstrate that they can track AI use, audit its impact and guarantee that every AI draft is treated as a fallible starting point — not a shortcut to the truth — the frog in Utah looks less like a funny glitch and more like a warning about where this technology is hopping next.
