- Police AI falsely claimed an officer turned into a frog during a stop.
- The error shows how easily AI misinterprets background audio.
- Miswritten police reports can follow drivers for many years.
Traffic stops aren’t fun for anyone, but there’s a new issue that sounds as fake as the frog policeman in our lead image. In December, police in Heber City, Utah, had to explain why an AI-generated police report claimed an officer had literally turned into a frog. The culprit wasn’t a snarky intern or a rogue officer. It was artificial intelligence hallucinating.
The department was testing AI-powered report-writing software that listens to body camera footage and automatically drafts police reports. Unfortunately, the system picked up audio from The Princess and the Frog playing in the background and confidently worked that into the official record.
“The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,’” Sgt. Keel told Fox 13. “That’s when we learned the importance of correcting these AI-generated reports.”
Now, that’s objectively hilarious, but it’s also a bit concerning. According to the news station, these AI tools are designed to save officers time by turning body cam audio into written reports. In theory, that means less paperwork and more patrol time.
When Software Writes the Story
In practice, it also means an algorithm is now interpreting conversations, tone, and background noise during roadside encounters, including traffic stops that can have long-term consequences for drivers.
It can be easy to view a traffic stop as a brief interaction, but the records it generates can be permanent. A single report can influence future stops, court proceedings, insurance claims, license suspensions, and even employment background checks.
Put another way, when AI gets something wrong, it’s not just a typo. It’s misinformation baked into an official document.
Close Enough Isn’t Good Enough
In Heber City’s case, the mistake was obvious enough to laugh off. But what happens when an AI misunderstands who said what, misinterprets a driver’s tone, or incorrectly summarizes why a stop escalated? Those errors are far more problematic.
Not only would such errors be harder to spot, but would every officer take the time to correct wording that is roughly right yet more intense than the encounter warranted, so long as it reads as “close enough”?
For now, the best move for everyday drivers may be to run a dashcam or another recording device so there is an independent record that no algorithm can rewrite.
Requesting body camera footage and reports through the Freedom of Information Act could prove vital as well. A frog in a report is funny, but your permanent record with law enforcement isn’t a game.

