When AI Goes Wrong: Lessons from a Deloitte Hallucination

In a world of Sora-generated videos and vibe working, it's too easy to forget what can go wrong when basic responsible AI practices are missed.

“I instantaneously knew it was either hallucinated by AI or the world’s best kept secret, because I’d never heard of the book and it sounded preposterous.” That is how one well-read researcher described spotting the AI-generated hallucinations in a report published by Deloitte Australia: fabricated citations that none of the workers on the $290,000 project had caught.

The benefits of using AI in business are very real, but so are the risks. And if your adoption approach overemphasizes the upside while neglecting to protect against the downsides, you are opening yourself up to becoming part of a story like this one.

I developed my Responsible AI for Business Users online training (linked below) to help businesses avoid exactly these kinds of risks. Training alone cannot mitigate every issue you might run into, but team members who thoroughly understand why AI tools get things wrong are an important foundation for an effective, responsible approach to AI.
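The Deloitte case centered on citations to a book that, as the researcher suspected, did not exist, and that is exactly the kind of error a lightweight verification step can surface before a report ships. As a minimal sketch of the idea, the snippet below checks cited book titles against the free Open Library search API. The citation list is hypothetical, and a match only shows that a title exists somewhere, not that the citation is accurate or relevant.

```python
# A minimal sketch of one extra guardrail: flagging cited book titles
# that cannot be found at all. Uses the free Open Library search API;
# the citation list below is hypothetical.
import json
import urllib.parse
import urllib.request

def title_exists(title: str) -> bool:
    """Return True if Open Library finds at least one book with this title."""
    query = urllib.parse.urlencode({"title": title, "limit": 1})
    url = f"https://openlibrary.org/search.json?{query}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response).get("numFound", 0) > 0

# Hypothetical titles extracted from an AI-assisted draft.
cited_titles = [
    "Thinking, Fast and Slow",           # real title, should be found
    "The Quantum Ledger of Compliance",  # invented title, should be flagged
]

for title in cited_titles:
    verdict = "found" if title_exists(title) else "NOT FOUND - verify manually"
    print(f"{title}: {verdict}")
```

A check like this is cheap insurance, not a replacement for human review; it is the kind of basic verification step a responsible AI workflow should build in from the start.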

First posted on LinkedIn on 10/13/2025 -> View LinkedIn post here
