Hallucination
When an AI generates information that sounds plausible but is factually incorrect or made up.
Why AI Hallucinates
AI models have no reliable sense of what they actually know. They predict plausible-sounding text from patterns in their training data, not from a store of verified facts. When those patterns suggest something should be true, the model may state it confidently even when it's wrong.
Common hallucination types:
- Inventing citations that don't exist
- Misremembering dates, numbers, or names
- Creating plausible-sounding but false technical details
- Attributing quotes to wrong people
How Serious Is It?
Hallucination rates vary by:
- Model quality (newer models generally hallucinate less)
- Task type (open-ended creative work tolerates invention; precise factual queries do not)
- How obscure the topic is
- Whether the model has relevant training data
For casual use, occasional hallucinations are annoying but manageable. For high-stakes decisions, hallucinations can be dangerous.
Reducing Hallucinations
RAG: Give the AI real documents to reference instead of relying on training data.
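A minimal sketch of the idea, assuming a toy in-memory document store, a naive keyword-overlap retriever, and a placeholder call_model function standing in for whatever LLM API you use (a real system would use embeddings and a vector store):

```python
# Minimal RAG sketch: retrieve relevant text, then ask the model to answer only from it.
# call_model is a hypothetical stand-in for your actual LLM client.

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

DOCUMENTS = {
    "refund-policy": "Refunds are issued within 14 days of purchase...",
    "shipping": "Standard shipping takes 3-5 business days...",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap (a real system would use embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)
```

The point is that the model's claims are constrained to text you supplied, which you can audit, rather than whatever its training data happened to contain.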
Grounding: Connect the AI to authoritative sources it can verify against.
Prompting: Ask the AI to cite sources or say "I don't know" when uncertain.
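For example, a system prompt along these lines (the wording is illustrative, not a guaranteed fix; models can still ignore instructions):

```python
# Illustrative system prompt nudging the model toward citing sources and admitting uncertainty.
SYSTEM_PROMPT = """You are a careful assistant.
- Only state facts you are confident about.
- When you cite a paper, statistic, or quote, name its source.
- If you are not sure of the answer, say "I don't know" instead of guessing."""

def build_messages(question: str) -> list[dict]:
    """Package the prompt in the chat-message format most LLM APIs accept."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
```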
Verification: Always fact-check AI outputs for important decisions.
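Parts of verification can be automated. A crude sketch, assuming a hand-maintained list of trusted references and a simple citation pattern (both hypothetical), that flags citations a human should check before relying on them:

```python
import re

# Hypothetical trusted bibliography; in practice this might come from a reference manager.
KNOWN_SOURCES = {"Smith et al. 2021", "Jones 2019"}

def flag_unverified_citations(ai_output: str) -> list[str]:
    """Return citations in the output that are not in the trusted bibliography."""
    cited = re.findall(r"\(([^()]+?\d{4})\)", ai_output)  # matches e.g. "(Smith et al. 2021)"
    return [c for c in cited if c not in KNOWN_SOURCES]

output = "Quantum batteries charge faster at scale (Nguyen 2023)."
print(flag_unverified_citations(output))  # ['Nguyen 2023'] -> check by hand before trusting
```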
Evaluating Tools
When reviewing AI tools, hallucination behavior matters. Does the tool admit uncertainty? Does it cite sources? How does it handle questions outside its knowledge? These factors separate reliable tools from risky ones.