AI scandal rocks Deloitte after error-filled report to Australian government

Deloitte is facing intense scrutiny in Australia after delivering a government-commissioned report riddled with errors—many of them traced back to generative artificial intelligence. The consulting firm had charged the Australian Department of Employment and Workplace Relations nearly 440,000 Australian dollars (about €251,000) for the analysis, which was meant to evaluate an automated welfare sanctions system. Following public backlash, Deloitte has pledged to reimburse part of the fee, though it hasn’t disclosed how much.

The controversy erupted shortly after the report's release, when academics and experts began flagging glaring inconsistencies. Chris Rudge, a researcher at the University of Sydney, uncovered more than twenty fabricated references, nonexistent quotes, and even a made-up court ruling cited as real. These were not minor footnotes; some of them underpinned the report's key arguments.

It soon became clear that a generative AI model, specifically OpenAI's GPT-4o accessed via Microsoft Azure, had been used to help write parts of the document. What was missing was any meaningful human oversight: without that verification step, AI-generated fabrications slipped through unchecked.

Deloitte has since released a revised version of the report, stripping out the problematic citations and attempting to assure the public that the main conclusions remain valid. But critics aren’t buying it. The fact that those conclusions were originally backed by invented sources has severely damaged the report’s credibility.

The Australian government confirmed that the errors stemmed from AI usage and acknowledged making extensive corrections across several pages. Senator Barbara Pocock of the Greens has called for a full refund, criticizing what she describes as Deloitte's reckless reliance on artificial intelligence.

The backlash is particularly damaging given Deloitte’s positioning as a global leader in AI strategy. The firm regularly advises major companies and public bodies on how to responsibly implement automation and generative technologies. Now, it finds itself accused of failing to meet the standards it promotes.

This incident has reignited a broader debate over the role of AI in high-stakes government work, where flawed data and unverifiable claims can have real-world consequences. It also highlights a growing concern: when artificial intelligence is used without proper human checks, even the most trusted institutions can deliver dangerously unreliable results.