Deloitte Faces Criticism for AI-Generated Healthcare Report


Pune, India | November 27, 2025

Deloitte, the global consulting firm, faces criticism after a recent healthcare report allegedly included citations that do not exist. A Canadian news organization reported that the provincial government of Newfoundland and Labrador commissioned the report for nearly US $1.6 million. Researchers, however, could not locate many referenced studies in recognized academic databases.

The 526-page report, submitted in May 2025, examined urgent issues such as virtual healthcare delivery, workforce shortages, and the ongoing effects of the pandemic on frontline health workers. Investigators found at least four references that did not correspond to legitimate journal articles or scholarly publications. Some citations named real researchers who denied any involvement, while others listed authors who appear not to exist.

Experts warn that these errors reflect “hallucinations,” a phenomenon in which artificial intelligence produces content that sounds credible but is entirely fabricated. These mistakes undermine the reliability of reports meant to guide public policy. “When reports rely on false evidence, they jeopardize public trust and misdirect valuable resources,” said one health-policy analyst.

Deloitte Canada defended the report. The firm stated that it “firmly stands behind the recommendations put forward” and acknowledged minor citation errors. Deloitte emphasized that these mistakes do not compromise the report’s overall findings. The company said that its team selectively used AI to support a small number of research citations instead of having it author the entire document.

Critics remain skeptical. Many specialists argue that AI requires meticulous human supervision and verification for critical research tasks. Without proper oversight, reports can present fabricated data as credible information, creating serious risks in sensitive areas like healthcare.

This is not Deloitte’s first incident involving inaccurate reporting. Last month, its Australian branch admitted to similar issues in a government-commissioned welfare report. That document also contained fabricated academic references and a made-up court quotation. Deloitte partially refunded the Australian government following these revelations.

The recurring incidents raise broader concerns about using generative AI in high-stakes consulting and policy work. While AI can improve speed and efficiency, unmonitored use can produce unreliable outcomes. Experts stress that consulting firms must establish strict verification processes to maintain public confidence.

So far, the Newfoundland and Labrador government has not requested a refund or launched a formal investigation. The report remains available on a government website, although many experts suggest temporarily removing it until independent verification is complete.

This case highlights a critical principle: credibility in consultancy and public policy relies on accuracy. AI-generated content that bypasses rigorous checks can undermine even widely circulated, expensive reports. Such failures erode public trust and raise the stakes for future research assisted by artificial intelligence.

As AI becomes more common in research and policy consulting, its benefits must be balanced against potential risks. Consulting firms, government agencies, and researchers should prioritize verification and human oversight to prevent false information from spreading. Transparency about AI use in report preparation can help rebuild trust and maintain accountability.

The Deloitte case also serves as a cautionary tale for organizations exploring AI integration. The technology can generate content rapidly, but reliance on AI without proper scrutiny carries professional and ethical risks. Stakeholders should demand robust validation protocols and independent fact-checking before reports influence policy decisions.

Ultimately, the controversy emphasizes a key lesson: AI tools cannot replace careful human judgment. High-profile consultancy work affects public policy, resource allocation, and healthcare outcomes. Ensuring accuracy, especially when AI is involved, remains essential to protecting institutional credibility and sustaining public trust.
