AI risks: what can go wrong (and how to protect yourself)
We believe in honesty about AI limitations. This page covers the real risks: hallucinations, implementation failures, and trust issues, all backed by credible research. Understanding these risks helps you make better decisions.
Despite the hype, AI implementation has a significant failure rate. According to aggregated industry research, 70–85% of AI initiatives fail to meet expected outcomes. In 2025, 42% of companies abandoned most of their AI initiatives, up from 17% in 2024.
• 70–85% of AI initiatives fail to meet expectations (Industry Research 2025)
• 42% of companies abandoned most AI initiatives in 2025 (Fullview Research)
• 39% of AI customer service bots were reworked due to errors (CX Association 2024)
These aren't edge cases; they represent the majority of AI implementations. The gap between AI hype and implementation reality remains substantial. Understanding why projects fail helps you avoid the same mistakes.
AI hallucinations: the accuracy problem
"Hallucination" is when AI generates confident-sounding but factually incorrect information. According to All About AI's Hallucination Report 2026, even the best models still hallucinate at least 0.7% of the time, while some exceed 25%.
• 0.7%–25%+ hallucination rate range across models (Vectara Leaderboard 2025)
• 77% of businesses are concerned about AI hallucinations (Industry Research 2025)
• 47% of enterprise users made decisions based on hallucinated content (All About AI 2024)
• 76% of enterprises now use human-in-the-loop processes (IBM AI Adoption Index 2025)
Real-world hallucination examples
• Legal citations: A Stanford study found LLMs hallucinated at least 75% of the time when asked about court rulings, inventing case names and legal reasoning.
• Financial impact: Google's Bard chatbot cost Alphabet over $100 billion in market value in a single day after incorrectly claiming the James Webb Telescope took the first pictures of exoplanets.
• Customer service: Air Canada was ordered by a tribunal to honour a refund policy that their chatbot had invented, demonstrating legal liability for AI errors.
Why hallucinations happen
AI models don't "know" things the way humans do. They predict statistically likely next words based on patterns in their training data. This means:
• They can't distinguish between facts and plausible-sounding fiction
• They have no concept of "I don't know"; they'll always generate something
• Confidence in tone doesn't correlate with accuracy
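This prediction-not-knowledge point can be sketched in a few lines of Python. The token table and probabilities below are invented for illustration (real models learn billions of such weights from training data), but the failure mode is the same: sampling always produces *some* token, with no truth check and no built-in way to abstain.

```python
import random

# Toy next-token table: a hypothetical stand-in for a language model's
# learned pattern frequencies. There is no fact store behind it.
NEXT_TOKEN_PROBS = {
    ("the", "capital", "of", "australia", "is"): {
        "canberra": 0.6,   # correct
        "sydney": 0.3,     # plausible-sounding but wrong
        "melbourne": 0.1,  # plausible-sounding but wrong
    },
}

def generate_next(context, rng=None):
    """Sample the statistically likely next word.

    Note what's missing: no truth check, and no option to say
    'I don't know' -- a token always comes out.
    """
    rng = rng or random.Random()
    probs = NEXT_TOKEN_PROBS.get(tuple(context), {})
    if not probs:
        return "something"  # even for unseen contexts, output is produced
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]
```

Roughly 4 times in 10, this toy model confidently emits a wrong capital; nothing in the sampling step distinguishes those runs from the correct ones.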
The AI trust deficit
• trust businesses to use AI ethically, down from 58% in 2023 (Industry Research 2024)
• 77% of Americans don't trust businesses to use AI responsibly (Survey Research 2025)
• 72% believe AI could spread misinformation (Gartner Research)
• 25% of US adults trust AI to provide accurate information (Survey 2025)
This trust deficit has practical implications. Customers who feel deceived or misled by AI interactions are less likely to convert or return. Transparency isn't just ethical; it's good business.
Data readiness: the hidden barrier
One of the most common reasons AI implementations fail is poor data readiness. According to Gartner research (December 2024), 61% of companies report their data assets aren't prepared for AI deployment.
What "data readiness" actually means
For chatbots:
• Up-to-date FAQs and service information
• Consistent messaging across documents
• Clear answers to common questions
• Defined escalation scenarios
For RAG (document AI):
• Clean, well-structured source documents
• Consistent formatting and terminology
• Regular update process
• Clear content ownership
For SMBs, this is actually good news. You don't need massive datasets; you need clean, accurate information about your business. Often, preparing for AI implementation forces you to document and clarify things that were previously implicit.
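The readiness checklist above can be made concrete with a small amount of metadata per document. A minimal sketch, assuming each source document carries an owner and a last-reviewed date (the field names and the 90-day window are illustrative choices, not a standard):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SourceDoc:
    """Minimal metadata that makes a document 'AI-ready'."""
    title: str
    owner: str          # clear content ownership
    last_updated: date  # supports a regular update process
    text: str

def stale_docs(docs, max_age_days=90, today=None):
    """Return titles of documents overdue for review."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [d.title for d in docs if d.last_updated < cutoff]
```

Running a check like this on a schedule turns "keep the FAQs up to date" from a vague intention into a concrete task list.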
Legal and compliance considerations
AI in customer-facing contexts raises legal questions that are still being resolved. Key considerations for Australian businesses:
Consumer law disclosure
Australian Consumer Law requires businesses not to mislead consumers. This likely includes disclosing when customers are interacting with AI rather than a human. We implement clear disclosure by default.
Accuracy obligations
The Air Canada case (where a tribunal ruled the airline was bound by its chatbot's invented refund policy) demonstrates that businesses may be liable for AI-generated claims. Guardrails and human oversight matter.
Privacy Act considerations
The 2024 amendments to the Privacy Act 1988 strengthen data protection requirements. AI implementations should minimise data collection, have clear retention policies, and avoid sending sensitive data to third-party AI providers unnecessarily.
Important note
We're developers, not lawyers. This information is educational, not legal advice. For specific compliance requirements, consult a qualified legal professional.
How we mitigate these risks
We can't eliminate AI risks entirely (no one can), but we can reduce them significantly through careful implementation:
Transparency & disclosure
• Clear disclosure that users are interacting with AI
• Honest communication about AI limitations
• Easy path to human support for complex queries
Accuracy controls
• Train only on verified, up-to-date business information
• Use RAG (document grounding) when accuracy is critical
• Implement guardrails to prevent off-topic responses
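Document grounding and guardrails fit together naturally: the bot only answers from retrieved passages, and escalates when nothing relevant is found. A minimal sketch; `retrieve` and `generate` are placeholder callables standing in for a real search index and a real model API, and the prompt wording is illustrative:

```python
def answer_with_grounding(question, retrieve, generate):
    """RAG-style sketch: retrieve passages first, and refuse to answer
    (escalating instead) when no grounded context exists."""
    passages = retrieve(question)
    if not passages:
        # Guardrail: with no grounded context, don't let the model guess.
        return {"answer": None, "escalate": True}
    prompt = (
        "Answer ONLY from the context below. If the context does not "
        "contain the answer, say so.\n\n"
        "Context:\n" + "\n".join(passages) +
        "\n\nQuestion: " + question
    )
    return {"answer": generate(prompt), "escalate": False}
```

The design choice here is that the refusal happens in ordinary code, before the model is ever called, so an off-topic question can't produce an invented policy the way Air Canada's chatbot did.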
Human oversight
• Escalation paths for sensitive or complex topics
• Regular review and optimisation of responses
• Logging and monitoring for quality assurance
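Escalation and logging can live in one small routing step. A sketch under assumed conventions: the topic labels and the 0.6 confidence threshold are invented placeholders that a real deployment would tune, and the log is just an in-memory list standing in for proper monitoring:

```python
# Hypothetical values -- tune per deployment.
LOW_CONFIDENCE = 0.6
SENSITIVE_TOPICS = {"refunds", "legal", "medical"}

def route_reply(topic, confidence, log):
    """Log every exchange for quality review, then hand off to a human
    when the topic is sensitive or the model's confidence is low."""
    log.append({"topic": topic, "confidence": confidence})
    if topic in SENSITIVE_TOPICS or confidence < LOW_CONFIDENCE:
        return "human"
    return "bot"
```

Because every exchange is logged before routing, the quality-review loop sees the borderline cases as well as the clean ones.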
Privacy-first approach
• Minimal data collection by default
• Sensible retention policies
• Option for Australian hosting where needed
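A retention policy is easiest to honour when it's enforced in code rather than by memory. A minimal sketch, assuming chat transcripts are stored as dicts with a timezone-aware timestamp; the 30-day default is an illustrative figure, not legal guidance:

```python
from datetime import datetime, timedelta, timezone

def prune_transcripts(records, retention_days=30, now=None):
    """Keep only chat records inside the retention window
    (data minimisation in practice)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["timestamp"] >= cutoff]
```

Run on a schedule, a pruning step like this means old transcripts simply aren't there to leak, subpoena, or mishandle.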
When we'll tell you NOT to use AI
Part of honest guidance is knowing when AI isn't the right solution. We'll recommend against AI chatbots if:
• You need 100% accuracy (legal advice, medical diagnosis)
• Your information changes too frequently to maintain
• Every enquiry is unique and requires human judgment
• You're not willing to invest in setup and ongoing tuning
• The topic is highly sensitive (crisis support, financial advice)
• Simple automation would solve the problem more reliably
Sometimes the best AI advice is "don't use AI for this." We'd rather lose a sale than build something that creates problems for your business.
Want honest AI guidance for your business?
Book a free consultation. We'll assess your use case, discuss risks honestly, and only recommend AI if it actually makes sense for your situation.