AI Hallucination
An AI hallucination is a confident-looking output that is false, unsupported or not grounded in the provided context.
Short definition
An AI hallucination is an output that sounds plausible but is wrong, invented or unsupported. The term is most often used for language models that produce fabricated facts, fake citations or flawed reasoning delivered in a confident tone.
How it happens
Language models are trained to predict likely text, not to guarantee truth. When the prompt is vague, the model lacks relevant context, or the task demands exact facts, the model fills the gap with plausible patterns learned during training, as the sketch below illustrates.
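To make that concrete, here is a minimal sketch of next-token sampling. The probabilities are hand-written toys, not real model outputs; the point is that the model picks whichever continuation is statistically plausible, and truth never enters the calculation.

```python
import random

# Hypothetical distribution over continuations of
# "The capital of Australia is ..." (toy numbers, not a real model).
next_token_probs = {
    "Canberra": 0.55,    # correct, and likely
    "Sydney": 0.40,      # wrong, but plausible from training patterns
    "Melbourne": 0.05,   # wrong, less likely
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 4 times in 10 this "model" confidently answers Sydney.
print(sample_next_token(next_token_probs))
```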
Example
A model might summarize a legal case and include a citation that does not exist. The answer may look polished, but the source is fabricated.
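One simple defense is to check every citation against a trusted index before accepting the answer. The sketch below assumes a hypothetical lookup table of known cases and uses exact string matching; a real system would query a legal database and normalize citation formats first.

```python
# Hypothetical trusted index of known cases; the case names are invented.
KNOWN_CASES = {
    "Smith v. Jones, 412 F.3d 101 (2005)",
    "Doe v. Acme Corp., 89 F. Supp. 2d 455 (2000)",
}

def verify_citations(cited: list[str]) -> list[str]:
    """Return the citations that cannot be found in the trusted index."""
    return [c for c in cited if c not in KNOWN_CASES]

model_output_citations = [
    "Smith v. Jones, 412 F.3d 101 (2005)",        # present in our index
    "Baker v. Quantum LLC, 77 F.4th 910 (2023)",  # fabricated
]
print(verify_citations(model_output_citations))
# -> ['Baker v. Quantum LLC, 77 F.4th 910 (2023)']
```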
Why it matters
Hallucinations are one of the main reasons AI outputs need evaluation and review. Mitigation strategies include retrieval-augmented generation (RAG), required citations, constrained outputs, automated tool checks and human approval in high-risk workflows.
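As one illustration of what an automated check can look like, the sketch below implements a naive grounding test in the spirit of RAG: every sentence of the answer must share enough words with the retrieved context, or it is flagged for review. The check_grounding function and the overlap threshold are hypothetical stand-ins for real entailment or citation checks.

```python
def check_grounding(answer: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose word overlap with the context is too low."""
    context_words = set(context.lower().split())
    unsupported = []
    for sentence in answer.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        if overlap < threshold:
            unsupported.append(sentence.strip())
    return unsupported

context = "The policy covers water damage from burst pipes up to 10000 dollars."
answer = (
    "The policy covers burst pipes up to 10000 dollars. "
    "It also covers flood damage."
)
for sentence in check_grounding(answer, context):
    print("Needs review:", sentence)  # flags the unsupported flood-damage claim
```

Word overlap is a deliberately crude proxy; the design point is the shape of the pipeline, in which generated claims are tested against retrieved evidence and anything unsupported is routed to a human instead of being shipped.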