
AI Hallucination

When an AI model generates confident, plausible-sounding but factually incorrect or fabricated information.

Definition

AI hallucination refers to the phenomenon where a large language model generates text that is factually incorrect, fabricated, or inconsistent with the provided context, while presenting it with apparent confidence. Hallucinations arise because LLMs are trained to generate statistically plausible text, not to verify factual accuracy. They may invent citations, misstate statistics, confuse similar entities, or generate internally consistent but false narratives.

Why it matters in 2026

AI hallucination is the primary barrier to enterprise AI adoption in high-stakes domains. Legal, medical, financial, and regulatory AI applications cannot tolerate fabricated information. The semantic AI stack — RAG systems grounded in verified knowledge bases, semantic layers that provide precise data definitions, and knowledge graphs that constrain AI reasoning — is the primary technical approach to hallucination mitigation. Organizations that have deployed semantic grounding are reporting 60-80% reductions in hallucination rates.

How it works

Hallucinations are mitigated through several techniques: RAG (grounding responses in retrieved documents), semantic layers (providing precise data definitions that prevent misinterpretation), knowledge graph grounding (constraining AI outputs to verified facts), constitutional AI (training models to self-check for accuracy), and output verification (using separate models or rule systems to verify generated claims). No single technique eliminates hallucinations entirely; defense-in-depth is required.
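Two of the techniques above, RAG grounding and output verification, can be sketched together. This is a minimal illustration, not a production system: the knowledge base, document ids, and keyword-overlap "retriever" are all hypothetical stand-ins (a real pipeline would use vector search over a verified corpus).

```python
import re

# Hypothetical mini knowledge base: verified doc id -> verified text.
KNOWLEDGE_BASE = {
    "kb-001": "The 2024 revenue figure was $4.2M, per the audited annual report.",
    "kb-002": "Customer churn is defined as accounts cancelled within 30 days.",
}

def retrieve(query: str, k: int = 2) -> dict:
    """Toy keyword-overlap retriever standing in for a vector-search step."""
    q_terms = set(re.findall(r"\w+", query.lower()))
    def score(text: str) -> int:
        return len(q_terms & set(re.findall(r"\w+", text.lower())))
    ranked = sorted(KNOWLEDGE_BASE.items(),
                    key=lambda kv: score(kv[1]), reverse=True)
    return dict(ranked[:k])

def build_grounded_prompt(query: str) -> str:
    """RAG step: instruct the model to answer only from retrieved passages."""
    docs = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in docs.items())
    return (
        "Answer ONLY from the sources below and cite source ids. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("How is customer churn defined?")
```

The point of the sketch is the contract, not the retrieval quality: the model never sees unverified text, and the prompt explicitly licenses an "I don't know" answer, which is the defense-in-depth posture the section describes.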

Real-world example

A legal AI assistant hallucinates a citation: 'Smith v. Jones, 2019, 9th Circuit, held that...' — a case that does not exist. In a RAG-grounded system, the AI is restricted to citing only documents in the verified legal database. When asked about a case, it retrieves the actual case document and summarizes it — it cannot fabricate a citation because it can only cite documents it has actually retrieved.
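The citation restriction described above can also be enforced after generation, as an output-verification pass. The sketch below assumes citations are rendered as bracketed ids like `[kb-002]`; that format, and the ids themselves, are illustrative assumptions, not a standard.

```python
import re

def verify_citations(answer: str, retrieved_ids: set) -> list:
    """Return citations in the answer that were NOT among the retrieved docs.

    Any id the model cites without having retrieved it is treated as a
    potential fabrication, mirroring the 'Smith v. Jones' failure mode.
    """
    cited = set(re.findall(r"\[([\w-]+)\]", answer))
    return sorted(cited - retrieved_ids)

# A fabricated citation [kb-999] is caught: it was never retrieved.
fabricated = verify_citations(
    "Churn means cancellation within 30 days [kb-002]; see also [kb-999].",
    retrieved_ids={"kb-001", "kb-002"},
)
```

A system combining this check with retrieval grounding can refuse to emit, or flag for human review, any answer whose citations fail verification, which is what makes fabricated case law detectable rather than merely less likely.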
